ISOLATION, MOLECULAR & PHYSIOLOGICAL CHARACTERIZATION OF SULFATE-REDUCING, HETEROTROPHIC DIAZOTROPHS

Nitrogen (N2) fixation is the process by which N2 gas is converted to biologically reactive ammonia, and is a cellular capability widely distributed amongst prokaryotes. This process is essential for the input of new, reactive N in a variety of environments. Heterotrophic bacterial N fixers residing in estuarine sediments have only recently been acknowledged as important contributors to the overall N budget of these ecosystems, and many specifics about their role in estuarine N cycling remain unknown, partly due to a lack of knowledge about their autecology and a lack of cultivated representatives. Dinitrogenase reductase (nifH) gene composition and prevalence in Narragansett Bay sediments has revealed that two distinct phylogenetic groups dominate N fixation. Analysis of nifH transcripts has revealed one active group to be the Desulfovibrionaceae, belonging to the Deltaproteobacteria. We see nifH expression from this group across sampling sites and times, despite the fact that Narragansett Bay sediments are replete with combined N, which is thought to inhibit N fixation in the environment. Here we present genomic and physiological data relating to N2 fixation by two heterotrophic members of the Desulfovibrionaceae, isolated from sediments of the Narragansett Bay estuary in 2010 and 2011, respectively: Desulfovibrio sp. NAR1 and Desulfovibrio sp. NAR2. To elucidate how nitrogenase activity in these organisms responds to the presence of different sources of combined N, and to link observed physiology with genomic potential (i.e. gene content), we performed a two-part study that coupled high-throughput genome sequencing and analysis with physiological investigations of growth on different N sources and N fixation rate measurements. The genomes of the two diazotrophic Narragansett Bay Desulfovibrio isolates (NAR1 and NAR2) were sequenced using a high-throughput platform, then assembled, annotated, and investigated for genes related to N fixation and overall N metabolism, which were then compared across 34 additional publicly available Desulfovibrio genomes. To link findings at the molecular level with observations at the physiological level, N fixation rates were measured using the acetylene reduction assay (ARA) under conditions free of reactive N, and under the following combined N conditions: 12 mM urea, 12 mM NO3-, and 12 mM NH3. Both isolates can sustain growth by N2 fixation in the absence of biologically available N, and our data indicate that nitrogenase activity is completely inhibited by the presence of ammonia, yet uninhibited by nitrate and urea, which are other forms of combined N found in Narragansett Bay. This agrees with observations made at the genome level, as neither our isolates nor the other Desulfovibrio examined in comparison appeared to have the genetic capability to use NO3- or urea catabolically to meet cellular N demands. This study indicates that the Desulfovibrionaceae are restricted in terms of the N sources they are capable of using, and that this may be a factor contributing to the observed N fixation by this group in sediments that are not limited for sources of combined N. Genome sequencing also reveals both isolates to be metabolically versatile and unique. The NAR1 isolate possesses genes involved in bacterial mercury methylation, and displays near obligate biofilm formation.
Genes were also found in the NAR1 isolate which suggest the involvement of c-di-GMP in cell-to-cell communication and biofilm formation. This is particularly interesting since biofilm formation and quorum sensing are not well characterized among the Desulfovibrio, despite biofilm formation being displayed by many members of this genus. While investigating the role of these organisms as important contributors of fixed N in Narragansett Bay, it was critical that we examine these additional aspects of their metabolism in order to gain a better understanding of controls on growth that may also impact biomass and the ability of these organisms to achieve significant rates of N fixation in the environment.

INTRODUCTION

The Role of Heterotrophic Diazotrophs in Estuarine Sediments

Estuarine sediments, such as those in Narragansett Bay, have historically been shown to be major regions of nitrogen (N) removal via the activity of denitrifying bacteria (Nixon et al. 1996). Denitrification results in the release of nitrogen (N2) gas and the loss of biologically reactive N.
The opposing process of N fixation converts N2 gas to biologically reactive ammonia, and has been demonstrated to occur in Narragansett Bay sediments by anaerobic diazotrophs (Fulweiler et al.). In Narragansett Bay the most active of these heterotrophic diazotrophs, based on the number of nifH (dinitrogenase reductase) transcripts, belong to two distinct phylogenetic groups: members of the Desulfovibrionaceae and the Geobacteraceae. In order to gain a better understanding of the ecology of these active N fixers, we attempted to isolate representatives of both phylogenetic groups from Narragansett Bay sediments. We were initially successful in isolating two such representatives. In other words, the aim is to shift from questions regarding "who, what, and where" to questions regarding "why and how" these organisms fix N in their respective habitats. Anaerobic heterotrophic diazotrophs, specifically members of the Desulfovibrionaceae, present a unique challenge because currently there is no model or set of supporting physiological data that explains why N-fixation activity by these organisms is observed in environments which are not limited for combined N. The paradigm for aerobic N-fixing bacteria is that combined N represses N fixation (Howarth et al. 1988), and this still remains to be investigated in anaerobic diazotrophs. Similarly, there has been very little investigation at the genomic level into the N metabolism of these organisms, their potential ability as a group to assimilate different combined sources of N, or a specific description of controls on their overall N fixation. We hypothesize that members of this genus may not be capable of using certain forms of N in an assimilatory manner and thus would need to continue to fix N under otherwise N-replete conditions. This may be an important contributing factor to what has been seen regarding their activity in the environment.

Thesis Motivation and Outline

Members of the Desulfovibrio were found to be one of the primary bacterial groups responsible for driving N-fixation activity in Narragansett Bay. Previously, sulfate-reducing bacteria (SRB) as a group have been found to fix N in culture, yet much remains to be discerned regarding their metabolism, nutrient cycling properties, and responses or adaptations to different environmental stressors, particularly from a genomic perspective and more specifically as those factors pertain to N fixation. There is a similar lack of information regarding N fixation rate data among the Desulfovibrio. Although there are previous studies examining N fixation rates for heterotrophic diazotrophs in sediments using the acetylene reduction assay (McGlathery et al. 1998; Welsh et al. 1996b), these studies have used environmental samples and are thus examining the overall N fixation ability of a mixed microbial consortium; they are not able to establish cell-specific rates of N2 fixation, or a connection between those rates and the responsible cells. This lack of evidence is most likely due to the fact that many of these organisms remain uncultured, and where there are cultivated representatives the potential contributions to heterotrophic N fixation by those specific organisms remain largely disregarded.
The lack of cultivated Desulfovibrio representatives and corresponding analysis of their genomic capabilities in regards to N fixation and metabolism, combined with a lack of organism-specific physiological data regarding N fixation rates and rate responses to environmental N conditions, have provided motivation for the current study.

Introduction

Estuarine sediments typically exhibit a nitrogen (N) cycle that is dominated by processes of N removal, such as coupled nitrification and denitrification (1). The opposing process of N2 fixation by prokaryotic organisms, known as diazotrophs, is the primary source of reactive N to the world's oceans and consequently acts as a control on both the N budget and primary production in many marine ecosystems. Historically, N fixation has been a process primarily attributed to cyanobacterial species residing in the water column, although the genetic potential to fix N is widely distributed amongst prokaryotes, including members of the bacteria and archaea (4). Due to the energetically unfavorable nature of biological N fixation and the typically abundant concentrations of combined N found in estuarine systems, heterotrophic N fixation was previously thought to be an inconsequential process in these environments (5). However, recent research involving direct observations of N2 flux across the sediment-water interface in a variety of marine, salt marsh, and sea grass systems supports that heterotrophic N fixation is an important source of reactive N in these systems, and has begun to alter the historically held conclusions regarding the role of heterotrophic diazotrophs (6-10). Additionally, significant N fixation has been documented in waters where cyanobacteria are not believed to be present or active (11-13). Members of the Desulfovibrio are sulfate reducers and have been shown to fix N in culture (15). Members of this genus are also noted for their ability to perform a wide variety of metabolic functions, including the reduction of sulfate to sulfur and sulfide species, the ability to utilize recalcitrant carbon sources, the ability to fix N, and the ability to transform certain metal species (15-18). Additionally, it has been known for some time that many sulfate-reducing bacteria (SRB), particularly members of the Desulfovibrio, have the genetic potential to fix N (23). These organisms have been shown to fix N in a laboratory setting (15, 24) and in a variety of habitats including coral reefs, photosynthetic microbial mats, mangrove sediments, sea grass rhizospheres (25-27), shallow estuarine sediments (6, 14, 28), bioturbated sediments (29), and salt marshes (9). There is further evidence supporting that these organisms play a critical role in supplying fixed N to their environment, particularly in anaerobic or anoxic sediments (14, 30) and benthic sediments which are N deficient (31-33).

Acetylene Reduction Assay

The nitrogenase activity (NA) of Desulfovibrio sp. NAR1 was measured under various combined N conditions. 10 mL cultures of NAR1 containing 2 g of 1 mm diameter glass beads in carbonate-buffered NBSO, NBSO + 12 mM NH3, NBSO + 12 mM NO3-, or NBSO + 12 mM urea were prepared in either duplicate or triplicate, and NA was assessed using the acetylene reduction assay (ARA) and methods described previously by Capone (44). Acetylene was generated in house by reacting calcium carbide with water and was collected in a 1 L Supel-Inert film gas sampling bag. Acetylene was added to the culture tubes to a final concentration of 10 to 20% of the total headspace.
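The acetylene addition above amounts to a simple headspace calculation. The sketch below (Python) is illustrative only: the tube volume and the 15% target are hypothetical values not stated in this excerpt, and the small overpressure created by injecting gas into a sealed tube is ignored.

```python
def acetylene_volume_ml(tube_volume_ml: float, culture_volume_ml: float,
                        target_fraction: float = 0.15) -> float:
    """Volume of acetylene to inject so that it makes up `target_fraction`
    (10-20% in the text) of the headspace volume."""
    headspace_ml = tube_volume_ml - culture_volume_ml
    return target_fraction * headspace_ml

# Hypothetical example: a 25 mL tube holding a 10 mL culture, with a 15% target
print(acetylene_volume_ml(25, 10))   # 2.25 mL of acetylene
```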
A Shimadzu GC8 gas chromatograph (Shimadzu Corporation, Kyoto, Japan), with a 2.5 m long stainless steel column containing Haysep T packing (80/100 mesh), was used to measure ethylene production in all ARAs. The injector and column were set to 130°C and 100°C, respectively. Gas samples of 100 µL were taken from the tube's headspace with a gas-tight syringe and immediately injected into the gas chromatograph. Ethylene production was measured over the course of 24 hours. Samples were usually measured 1, 3, 6 and occasionally 24 hours after acetylene was added. Cultures that were part of the same assay were inoculated at the same time using the same parent culture. A set of multiple potential parent cultures was established for each set of experimental tubes; NA was measured in the parent tubes to ensure that the inoculum for the experimental cultures was actively fixing N at the time of inoculation, and once NA was established in one member of the parent culture series, the next untouched parent culture was used to inoculate the experimental culture series. A series of potential parent cultures was necessary, rather than measuring the NA of one culture and using that same culture as a parent inoculum, to mitigate the negative effects of long-term acetylene exposure on the growth of NAR1. Cultures were also no longer sterile after being used in the acetylene reduction assay (ARA), and so could not themselves be used as parents. Additionally, measurement of NA in the parent culture was a critical means of determining when the culture was ready to be used as inoculum, since cell enumeration of the NAR1 isolate remains challenging, and as of yet no firm correlation between culture age and NA has been established. Biomass estimates were ultimately made by extracting DNA from pure cultures grown under N-limited conditions, and subsequently using the DNA yield to calculate the total number of genomes present in the culture volume, which was used as a representation of cell abundance. Figure 6 shows the log growth of NAR1 under N-limited conditions. This organism exhibits very slow growth and very little change in overall biomass when grown under N-limited conditions. However, slow growth is not unusual for environmental bacteria such as members of the Desulfovibrio (58), and it is likely that there is some loss of DNA during the extraction process, which would result in a lower reported cell count. Ethylene production by NAR1 (Fig. 7) was first measured on N-limited cultures over a period of ~14 days prior to examining the effects of combined nitrogen on NA in the NAR1 isolate. Previous pilot experiments had shown that no NA occurred before the third day after inoculation, so these time points were not included in the study. Figure 7 shows NA for the NAR1 isolate for the same time period in which growth was measured (Fig. 6). These measurements indicate that peak NA in N-limited cultures of NAR1 occurs early in the growth of the organism, during late lag or early log phase, which is a trend that had been observed in previous assays (data not shown). Peak NA is within the range described for other bacterial isolates (59). The observed high activity on Day 5, when the culture is young and the cell count is lower, may be due to the N demands of the organisms as they prepare to enter log phase. It is unclear at this time whether biofilm formation plays a role in increased NA or overall N demand.
Since Desulfovibrio biofilms are known to be composed primarily of protein (60,61), this remains a possibility and should be considered in future investigations. Further studies would be needed in order to elucidate what specific factors contribute to the timing of peak NA in this isolate.

Growth and acetylene reduction of NAR1 under differing combined N conditions

After establishing a baseline NA using N-limited cultures (see previous section), ethylene production for cultures of NAR1 grown under different combined N treatments was measured using the ARA (Fig. 9). Keeping in mind that replicates in this experiment are separate cultures, and that Fig. 8 shows the total number of cells present in these cultures measured at the respective time point (rather than the same culture with change in cell abundance quantified over time), it is plausible that the lower cell count on day 10 for all treatments except nitrate is due to fewer cells being present in the inoculum used for those cultures compared to the day 8 cultures. Possibly inflated ethylene production rates due to discrepancies in culture density are corrected for by normalizing ethylene production rates to cell abundance.

Genome sequencing outputs, assembly and annotation

Genome sequencing using the Illumina MiSeq platform was carried out for Desulfovibrio sp. NAR1 and Desulfovibrio sp. NAR2. The number of paired-end, 250 bp reads obtained for each isolate was more than 16 million. Phred per-base quality scores of ≥ 10 were reported for all NAR1 and NAR2 raw reads, which represents an inferred base-call accuracy of at least 90% (64,65). Ambiguous N bases were not detected in either raw data set. Although these values indicate that the probability of an incorrect base call was minimal for the raw sequences, both data sets were trimmed prior to being used in de novo assemblies. The number of paired-end reads remaining for each isolate after trimming was more than 13 million, which represents between 700x and 900x expected genome coverage for each isolate. Phred per-base quality scores of ≥ 30 were reported for all NAR1 and NAR2 trimmed reads, which in turn represents an inferred base-call accuracy of at least 99.9%. Ambiguous N bases were not detected in either trimmed data set. The average read length after trimming was 200 bp and 210 bp for NAR1 and NAR2, respectively. These resulting quality values represent an improvement over the raw data sets and indicate that the probability of an incorrect base call is minimal for these sequences; at this point both trimmed data sets were considered to be appropriate for generating de novo assemblies. Statistics for the de novo assemblies of D. sp. NAR1 and D. sp. NAR2 are shown in Table 1. The NAR1 genome data was assembled first, and used to establish a pipeline for working with the NAR2 data. Both isolate data sets were trimmed and assembled using identical parameters, using all reads in their respective sets. Individual contig coverage values were averaged for both assemblies, and are reported in Table 1. Fold coverage for individual contigs for NAR1 and NAR2 ranged from 339x-909x and 705x-1,876x, respectively (Supplemental Tables S1 and S2). Comparisons of assembly statistics such as N50, N75, number of contigs, and longest contigs for these isolates against other published draft genomes indicate that good-quality assemblies have been achieved for both isolates (66,67). The contigs were ultimately reordered using Mauve alignments (57), as shown in Figure 2.
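Two of the quantitative conversions used above (the DNA-yield-based cell abundance estimate and the read-based coverage estimate) are simple enough to sketch. The snippet below is illustrative only: the genome size, the example DNA yield, and the treatment of the reported 13 million as a total read count are assumptions, not numbers taken from Table 1.

```python
AVOGADRO = 6.022e23             # molecules per mole
MEAN_BP_MASS_G_PER_MOL = 650.0  # average molar mass of one double-stranded base pair

def genome_equivalents(dna_yield_ng: float, genome_size_bp: float) -> float:
    """Estimate cell abundance from a DNA extraction yield by counting genome copies.
    Assumes the extract is pure genomic DNA from one organism and ignores extraction
    losses (the text notes such losses would lower the reported cell count)."""
    grams = dna_yield_ng * 1e-9
    return grams / (genome_size_bp * MEAN_BP_MASS_G_PER_MOL) * AVOGADRO

def phred_to_accuracy(q: float) -> float:
    """Probability that a base call with Phred score q is correct (Q10 -> 0.90, Q30 -> 0.999)."""
    return 1.0 - 10.0 ** (-q / 10.0)

def expected_coverage(n_reads: float, mean_read_len_bp: float, genome_size_bp: float) -> float:
    """Expected fold coverage = total sequenced bases / genome size."""
    return n_reads * mean_read_len_bp / genome_size_bp

# Hypothetical example values (a ~3.7 Mb genome size is an assumption):
print(genome_equivalents(50.0, 3.7e6))               # ~1.3e7 genome copies from 50 ng of DNA
print(phred_to_accuracy(10), phred_to_accuracy(30))  # 0.90, 0.999
print(expected_coverage(13e6, 200.0, 3.7e6))         # ~700x, in the range reported above
```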
Comparison of isolate genomes using IMG/ER and RAST

A combination of IMG/ER, RAST, and manual annotation using BLAST tools (50, 69, 70) was used to annotate both isolate draft genomes. IMG/ER was used as the primary annotator and comparison tool, RAST was used sparingly, and manual annotation was used primarily where automatic annotations seemed questionable, were missing, or where genes were of special interest to this study. Total coding bases were split into protein-coding genes (PCGs), RNA-coding genes (RCGs), and hypothetical PCGs. These data, along with the breakdown of RCGs into those accounting for rRNA, tRNA, and other RNA genes, are shown in Table 2 along with the same data for two closely related Desulfovibrio species for each isolate: D. desulfuricans str. ND132 and D. piezophilus for NAR1, and D. desulfuricans subsp. aestuarii and D. alaskensis str. G20 for NAR2. The more phylogenetically distant representative, D. vulgaris str. Miyazaki F, was also included as an additional point of comparison. Tabulated attributes are similar overall between the isolates and the previously sequenced genomes listed. However, the NAR1 genome is noteworthy with respect to its two annotated clustered regularly interspaced short palindromic repeat regions (CRISPRs), whereas three of the other genomes included in Table 2 have only one CRISPR region and three do not have any. CRISPR-Cas systems play a role in adaptive immunity against phages and other invading genetic elements and are present in approximately 40% of sequenced eubacterial genomes and 90% of archaeal genomes (71,72). It is interesting to note that NAR2 lacks a CRISPR-Cas system, particularly since these organisms were isolated from the same site and would presumably have been exposed to similar phage attacks and the same foreign DNA in the environment. Isolate PCGs with KEGG, COG, Pfam and TIGRfam (73-76) annotations in IMG/ER were compared across subcategories in terms of the number of genes contained in each subcategory and the percentage of total PCGs with pathway association that the gene number represents. The percentage is not particularly meaningful on its own because it is taken from the total number of genes with pathway association, which can vary depending on which database is being considered, and it is not a representation of total PCGs. Even so, the percentage of genes represented in each of the various subcategories remained relatively consistent between the two isolates. So from a broad, overall standpoint, the two isolates look similar in terms of gene content with function assignments. However, not all possible metabolic subcategories are represented, and details regarding gene content rather than overall gene number cannot be assessed in this manner. The RAST database uses SEED (69) subsystems for its functional assignments, and the isolates differ in some of these subsystem counts. This difference is in agreement with the observed exudate and biofilm production in NAR1, which is not observed at all in the NAR2 isolate. The number of genes assigned to motility and chemotaxis subsystems is also of interest; the NAR1 isolate has 136 genes assigned to this subcategory, with 100 associated with flagellar motility and 36 assigned to bacterial chemotaxis. NAR2, comparatively, has 94 genes assigned to motility and chemotaxis, with all of these genes being associated with flagellar motility. This difference between the two isolates in terms of genes involved in chemotaxis could also be related to the observed biofilm formation in NAR1, and could be part of the pathway that signals biofilm formation.
As this pathway is not yet described among the Desulfovibrio, these genes should be considered as targets for future studies involving biofilm formation in this genus. Predicted beta-lactamases were also identified in both isolate genomes (84). So although the presence of predicted beta-lactamases in these isolates may not be surprising in regards to environment or genus assignment, it should still be noted and considered as a possible target for future physiological studies, especially since the potential human pathogenicity of these isolates remains unknown. The NAR1 isolate in particular should be considered, as it contains additional putative genes for colicin V production and fosfomycin resistance. The presence of these genes has been observed and annotated in other Desulfovibrio, but there has yet to be physiological confirmation of colicin V production in this genus, while fosfomycin resistance has already been observed in some Desulfovibrio species with clinical relevance (85).

Whole genome comparison and alignments

The draft genomes of isolates Desulfovibrio sp. NAR1 and Desulfovibrio sp. NAR2 were compared to all complete or well-annotated draft Desulfovibrio genomes available in the GenBank or IMG databases (50, 86) at the time of this study (for a list of genomes see Supplemental Table S3), a total of 34 additional genomes. A multi-gene phylogenetic tree (Figure 1) for all 36 Desulfovibrio representatives was constructed using 20 different vertically transferred genes (Table S4). Large regions of unaligned sequence (white space) in the NAR2 whole-genome alignment suggest that this genome contains a significant amount of sequence variability compared with D. alaskensis str. G20, which is not surprising as G20 is only 89% similar to NAR2 at the 16S rRNA level. The organisms more closely related to NAR2 did not have closed genomes and so could not be used as references in this assessment. The NAR1 alignment contains less white space than the NAR2 alignment, which agrees with the higher level of sequence similarity, 95%, between NAR1 and D. piezophilus at the 16S rRNA level. In both alignments, note the "X" pattern formed by connected locally collinear blocks (LCBs); this typically occurs at the origin of replication in aligned genomes. The number, arrangement, and heights of similarity profiles in the whole genome alignments of both environmental isolates are indicative of organisms related at the genus level, but not at the species or strain level.

Gene network analysis

Evolutionary gene networks (87) were used to compare the genomes of the two environmental isolates, three of their closest relatives, and D. vulgaris Hildenborough, whose genome has been well studied and serves as an additional point of comparison. Genomes including D. aespoeensis str. Aspo-2 were included as part of the NAR1 cluster, and the genomes of D. acrylicus, D. desulfuricans subsp. aestuarii and D. alaskensis str. G20 were included as part of the NAR2 cluster. The initial gene network was run using parameters discussed previously in the Methods section, and the results were subsequently filtered to select networks that consisted of only NAR1 and NAR2 (Fig. 3.2), only NAR1 (Fig. 3.3), or only NAR2 (Fig. 3.4). Given the phylogenetic relationships shown in Fig. 1, and the only 87% 16S rRNA sequence similarity between the two isolates, the networks shared exclusively by NAR1 and NAR2 could indicate proteins that confer specific benefits for survival in Narragansett Bay sediments. The fact that some of these genes (e.g. beta-lactamases, DMTs) are involved in bacterial defense or bacterial detoxification (glyoxalase family proteins), or as of yet have an undescribed function, but are exclusively shared between these two isolates, supports this hypothesis.
Because of this potential connection, these proteins should be considered as targets for future investigations, especially those involving transcriptomic or gene expression analysis. The filtered network containing only NAR1 had approximately 650 connected components (see Appendix A for a list of corresponding proteins); the majority of these connected components consisted of proteins involved in signal transduction and amino acid transport, with a smaller portion of the overall networks consisting of proteins involved in bacterial defense. Components of particular interest are shown in Fig. 3.3, which include a putative sensory box/GGDEF family protein network (Fig. 3.3 A), a putative diguanylate cyclase and receptor proteins network (Fig. 3.3 B), and a periplasmic binding and signal transduction proteins network (Fig. 3.3 C), as they have implications for biofilm formation in NAR1. These proteins and their potential role in this isolate's biofilm production will be discussed further in a later section. Additional proteins found to be unique to NAR1 primarily had to do with signal transduction and amino acid transport, which is suggestive of involvement in biofilm formation and exudate production. The fact that these proteins form networks exclusive of any predicted proteins in NAR2 agrees with what we have observed at the physiological level, with biofilm and exudate production being restricted to NAR1 and not observed at all in NAR2. There were only two connected components that consisted of just NAR2 (Fig. 3.4): non-specific predicted membrane proteins (Fig. 3.4 A) and bacteriophage head-to-tail connecting proteins (Fig. 3.4 B). The two non-specific predicted membrane proteins (Dn2DRAFT 02896, Dn2DRAFT 02903) share the highest degree of BLAST homology, using BLASTX, with a hypothetical protein in Thiocapsa marina, a purple sulfur bacterium. Dn2DRAFT 02896 shares 59% amino acid identity across 97% of the query, and Dn2DRAFT 02903 shares 60% amino acid identity across 98% of the query with a hypothetical protein (Seq ID: ref|WP_007193013.1) in T. marina. All BLAST results with a high enough degree of amino acid similarity to be of interest (≥ 60% sequence similarity) were to other hypothetical proteins, primarily from betaproteobacteria, and so did not reveal any insight as to the possible function of these proteins in NAR2.

Nitrogen fixation

Of the 34 representatives used in comparison to the environmental isolates, only 7 lack the nif operon (Table 3). Only one representative, Desulfovibrio sp. U5L, has genes for an alternate Fe-Fe nitrogenase. Both environmental isolates NAR1 and NAR2 have a full nif operon (Fig. 4), with the arrangement of their N fixation gene cluster being similar to those of their closest N-fixing relatives. There is also supporting physiological evidence that both isolates fix N. The presence of an iron-molybdenum nitrogenase appears to be a shared characteristic for this representative group of Desulfovibrio. In addition to the examination of N fixation, an analysis of the metabolism of other N substrates (ammonia, nitrate, and urea) was performed and is discussed in the following sections. The analysis of additional aspects of N metabolism in these isolates is critical to improving our understanding of why D. sp. NAR1 and D. sp. NAR2 exhibit the N fixation behavior we have observed in Narragansett Bay, as an inability to use other sources of N that are commonly found in the environment could account for a continued need to fix N.
To do that, it is important for us to be able to take the physiology discussed in the previous sections and connect it with related functional gene content, which is discussed below.

Urea metabolism

Currently, most attention regarding bacterial urea metabolism is given to organisms that make up mammalian gut consortia and intestinal human pathogens, and little focus has been placed on the urea metabolism of environmental representatives like the Desulfovibrio, or on sulfate-reducing bacteria in general. However, there have been some examinations of the uptake and metabolism of urea by environmental bacteria and phytoplankton (88,89). These studies have shown that rates of bacterial urea uptake in the environment are highly variable, that genes for urea transport and catabolism are not widespread amongst bacteria, and that other forms of N are generally preferred to urea, which could be due to the fact that urea catabolism is an energetically expensive process. Although some members of the Desulfovibrio are known to have urea transporters and ureases, little is known about the fate of urea once it enters a Desulfovibrio cell. There exist at least four families of transporters that facilitate selective permeation of urea: an ATP-dependent ABC-type urea transporter (90), an ion motive force-dependent urea transporter (91), an acid-activated urea channel that belongs to the urea/amide channel family (92), and the urea transporter (UT) family, this last type being the most widely distributed family. UT members are found in bacteria, fungi, insects and vertebrates (91, 93-96). In many bacteria and eukaryotes, urea in the cell can be broken down to ammonia and CO2 by a urease. Some bacteria and eukaryotes also use urea amidolyases (UALase) to decompose urea (88). A crystal structure for a UT-family urea transporter from Desulfovibrio vulgaris Hildenborough is available and its activity and mechanism of action have been demonstrated in vivo (97). However, subsequent urea metabolism after transport has not been thoroughly investigated. In this study, an examination using IMG/ER, with visual sequence confirmation in Geneious and further confirmation using tblastn, revealed that neither isolate has an ability to use urea catabolically. These findings in the genome data agree with what has been observed at the physiological level for both isolates, to the extent that both isolates fix N even in the presence of urea, which supports the conclusion that they cannot catabolize urea. However, the NAR1 isolate has exhibited increased NA in the presence of urea, which would seem to indicate that the isolate has some means of sensing its presence. It would appear that any possible method NAR1 could be employing to sense urea and/or transport it into the cell is not a part of described urea metabolism or transport in bacteria. Since urea metabolism in environmental representatives is currently not very well described, further physiological and molecular investigations are needed to elucidate the mechanism and response seen in NAR1. The fact that neither isolate is predicted to be capable of catabolizing urea does not make them unique amongst the Desulfovibrio, or amongst the eubacteria in general.

Ammonia metabolism

Ammonium is known to be the most universally utilized source of biologically available N, and is taken up preferentially by estuarine microbes (89). Accordingly, we would expect to find genes for ammonium uptake and incorporation in all Desulfovibrio representatives.
An examination of the environmental isolates and additional representatives revealed that all Desulfovibrio, with the exception of D. cuneatus, possess at least one copy of the ammonium transporter (amt, TIGRfam accession: TIGR00836) (Table 3). It is possible that, because the D. cuneatus genome is a draft, the ammonium transporter was either missed in annotation or is missing from the assembly, and that the organism may in fact have the transporter. The majority of representatives have multiple copies of the ammonium transporter, whereas both NAR1 and NAR2 have a single copy, making them slightly atypical in this regard. There are, however, 4 additional representatives that also have a single copy of the transporter. A BLAST search did not reveal any additional copies of the ammonium transporter in either isolate, but it is possible that both isolates could have additional copies of the transporter that are missing from the current assemblies, or that they have a different protein acting as an ammonium transporter that has not yet had that function formally assigned to it. The ammonium transporter for NAR1 is located on DESnar1_contig6, at 122,261-123,619 bp, in the forward direction. It is immediately followed by a copy of N regulatory protein P-II, two hypothetical proteins, and a copy of glutamate synthase ~3.5 kb downstream, an arrangement which makes sense in terms of the nitrogen regulation and activity that has been seen in other bacteria (34,36,38). In the NAR2 isolate, the ammonium transporter is located on DESnar2_contig2 at 904,931-905,257 bp, in the forward direction. It is immediately preceded by a copy of N regulatory protein P-II, and immediately followed by an isocitrate dehydrogenase and a protein disulfide isomerase. NAR2 does have a copy of glutamate synthase, however it is located on a different contig. Whether ammonia is used directly from the environment or is derived from other N sources, its assimilation involves metabolites. Some metabolites, such as 2-oxoglutarate, signal N sufficiency or deficiency to the regulatory apparatus (98). To confirm the potential for ammonia uptake and incorporation in both isolates, additional proteins involved in the N assimilatory pathway were examined in both isolates and the additional Desulfovibrio representatives (Table 3). The N regulatory protein P-II is a 2-oxoglutarate (2OG) sensor, which is involved in the adenylation cascade that regulates the activity and concentration of glutamine synthetase (GS) in response to N source availability (99). The majority of the Desulfovibrio genomes examined here, including both environmental isolates, have between 2 and 4 copies of this protein. NAR1 has 4 copies of the protein, with two copies located side by side between nifH and nifD on DESnar1_contig13, and a third copy located near the previously mentioned ammonium transporter. The fourth copy is located on DESnar1_contig1 at 248,250-248,591 in the forward direction. It is surrounded on either side by hypothetical proteins; further upstream are proteins involved in cellular respiration and downstream are proteins involved in the shikimate pathway. NAR2 has 3 copies of the N regulatory protein P-II, with two copies located side by side and preceded immediately by nifH and followed immediately by nifD, the same arrangement seen in NAR1. The third copy is located on DESnar2_contig2 and was previously mentioned in relation to the ammonium transporter, which immediately follows this copy of N regulatory protein P-II.
The locations of all copies of the P-II protein in both isolates make sense in terms of transcription, regulation, and activity, given their proximity to other genes involved in N metabolism. Because the majority of prokaryotes possess the glutamine synthetase (GS)/glutamate synthase (GOGAT) pathway of ammonia assimilation, both environmental isolates were assessed for components of this pathway. The isolates were also assessed for an alternate pathway involving the NADP-linked glutamate dehydrogenase, which catalyzes the amination of 2-oxoglutarate to form glutamate. NAR1 has two copies of glutamine synthetase; the first copy is annotated as a type III glutamine synthetase, and the second is annotated as a type I glutamine synthetase, both of which have been found previously in prokaryotes (100,101). NAR1 also has all subunits for the NADPH-type GOGAT and an NAD-specific glutamate dehydrogenase. NAR2 has a typical prokaryotic type I glutamine synthetase, all subunits of the NADPH GOGAT, and a glutamate/leucine dehydrogenase. Both environmental isolates appear to have a complete GS/GOGAT system of ammonia assimilation, as well as a glutamate dehydrogenase. They both possess the critically important N regulatory protein P-II, with copies of this gene found at genomic locations that make sense in terms of N sensing and regulation. This provides evidence that both isolates have a predicted means of sensing ammonia in the cell, a means of assimilating it, and a means of signaling the regulation of other genes involved in N metabolism under differing N conditions. These findings support our physiological observations, in that neither isolate exhibited NA in the presence of ammonia.

Nitrate metabolism

No member of the Desulfovibrio examined here, including the environmental isolates, has a predicted means of assimilating nitrate (Nas-type nitrate reductases) (Table 3).

Biofilm formation

Biofilm formation is displayed by many members of the Desulfovibrio. However, the genes responsible for biofilm formation in these organisms have not been conclusively identified or well studied, and the cell-to-cell communication and quorum sensing pathways involved in biofilm formation for these organisms remain unclear. An examination of NAR1 gene annotations in IMG/ER and Geneious revealed no genes belonging to the well described lux family of quorum sensing genes (108,109); these results were then confirmed using a tblastn analysis with the assembled NAR1 contigs as the database and Lux proteins from Vibrio fischeri as the queries. Putative genes involved in biofilm formation in NAR1 were ultimately discovered using a filtered evolutionary gene network (87) (Fig. 3.3). The majority of these genes belong to a family of diguanylate cyclases with GGDEF (110) domains, and c-di-GMP receptor domain proteins. Cyclic di-GMP (c-di-GMP) is a bacterial second messenger that is widely utilized by bacteria, with more than 80% of sequenced bacteria predicted to use this signal (111,112). C-di-GMP controls a variety of phenotypes, including biofilm formation, motility, and virulence in multiple bacteria (111,113,114). The fact that the predicted diguanylate cyclases in NAR1 did not network with proteins from any other closely related Desulfovibrio representatives supports the hypothesis that these proteins may serve a unique function in NAR1.
Given the documented role of diguanylate cyclases and c-di-GMP in biofilm formation, and that NAR1 is unique in its near obligate biofilm lifestyle when compared with NAR2 and other representatives of the Desulfovibrio, it is possible that the unique function served by these proteins is the coordination of biofilm formation in NAR1. These genes and the proteins they code for should be further investigated to confirm any role in biofilm formation in this isolate. A separate examination of c-di-GMP levels in NAR1 and other biofilm-forming Desulfovibrio during biofilm growth and during planktonic growth should be considered in order to elucidate the role of c-di-GMP as a signaling molecule for biofilm formation in these organisms.

Mercury methylation

Mercury (Hg) is a pervasive global pollutant known to be found in Narragansett Bay sediments (79); in its methylated form (CH3Hg+), it bioaccumulates and is highly toxic to humans and other organisms (115). Unlike inorganic forms of Hg, which originate from atmospheric deposition and point discharge, CH3Hg+ is generated in the environment by microorganisms. Hg methylation is largely restricted to the proteobacteria and primarily to anaerobic organisms (116). Sulfate-reducing bacteria, such as the Desulfovibrio, are the main producers of CH3Hg+ (117,118), although iron-reducing bacteria and methanogens can also be involved (119,120). The genetic basis for bacterial mercury methylation was recently described by Parks et al. (18). Because some of the closest relatives to NAR1 are confirmed Hg methylators and because Narragansett Bay sediments are known to contain Hg, the draft genomes of both environmental isolates were searched for hgcA and hgcB, the genes required for bacterial Hg methylation (18). The amino acid sequences for both Hg methylation proteins from D. desulfuricans str. ND132 were used as queries to search the draft genomes of NAR1 and NAR2 using tblastn. No homologs for these proteins were found in the genome of NAR2; however, homologs were found in the genome of NAR1 (Fig. 5).

Figure caption: N fixation gene clusters of Narragansett Bay isolates (black arrows) and three closely related Desulfovibrio representatives. Color-coded arrows indicate coding sequences (CDS) and their orientation.

Figure caption: Nitrogenase activity of NAR1 grown under N-limited conditions, reported as nmol C2H4 produced per cell per day, measured using the acetylene reduction assay. Error bars represent one standard deviation from the sample mean.

Cultures with added 12 mM urea were found to reduce acetylene to ethylene, again with peak ethylene production occurring 10 days after inoculation but with rates that were slightly higher than the reactive-N-free control and nitrate sets. Increased NA for NAR1 in the presence of urea is something that was observed in earlier pilot experiments, and is a result for which there is currently no explanation or known cause. Cultures with added 12 mM ammonia were not found to reduce acetylene to ethylene, exhibiting no measurable NA, which agrees with what is widely accepted regarding the behavior of the nitrogenase enzyme.
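The homology searches described above (the lux and hgcAB queries against the draft assemblies) were run with tblastn. The snippet below is a minimal sketch of such a search using NCBI BLAST+ called from Python; the file names are hypothetical placeholders, and the E-value cutoff and output columns are illustrative choices rather than the parameters used in this study.

```python
import subprocess

# Hypothetical file names: a FASTA of the HgcA/HgcB protein sequences from
# D. desulfuricans ND132 and the NAR1 draft assembly contigs.
QUERY = "hgcAB_ND132.faa"            # amino acid query sequences
CONTIGS = "DESnar1_contigs.fasta"    # nucleotide database (draft assembly)

# Build a nucleotide BLAST database, then run the protein-vs-translated-nucleotide search.
subprocess.run(["makeblastdb", "-in", CONTIGS, "-dbtype", "nucl", "-out", "nar1_db"],
               check=True)
result = subprocess.run(
    ["tblastn", "-query", QUERY, "-db", "nar1_db", "-evalue", "1e-5",
     "-outfmt", "6 qseqid sseqid pident length evalue bitscore"],
    capture_output=True, text=True, check=True,
)
# Tab-separated hits; empty output suggests no detectable homolog in the assembly.
print(result.stdout if result.stdout else "no hits")
```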
The genomes of both the NAR1 and NAR2 isolates were further found to lack genes necessary for catabolism of either nitrate or urea, but were found to possess genes necessary for assimilating and catabolizing ammonia and deaminating amino acids, which agrees with our ARA-based observations. Additionally, these gene-level observations held true for the majority of the Desulfovibrio genomes that were assessed in comparison to our environmental isolates, suggesting that an inability to utilize forms of reactive N other than ammonia may be characteristic of the Desulfovibrio. This inability could in turn be part of why we observe N fixation by this group even in environments that are not limited for sources of combined N, such as Narragansett Bay.
A Novel 2D Standard Cartesian Representation for the Human Sensorimotor Cortex

For some experimental approaches in brain imaging, the existing normalization techniques are not always sufficient. This may be the case if the anatomical shape of the region of interest varies substantially across subjects, or if one needs to compare the left and right hemisphere in the same subject. Here we propose a new standard representation, building upon existing normalization methods: Cgrid (Cartesian geometric representation with isometric dimensions). Cgrid is based on imposing a Cartesian grid over a cortical region of interest that is bounded by anatomical (atlas-based) landmarks. We applied this new representation to the sensorimotor cortex and we evaluated its performance by studying the similarity of activation patterns for hand, foot and tongue movements between subjects, and the similarity between hemispheres within subjects. The Cgrid similarities were benchmarked against the similarities of activation patterns when transformed into standard MNI space using SPM, and against similarities from FreeSurfer's surface-based normalization. For both between-subject and between-hemisphere comparisons, similarity scores in Cgrid were high, similar to those from FreeSurfer normalization and higher than similarity scores from SPM's MNI normalization. This indicates that Cgrid allows for a straightforward way of representing and comparing sensorimotor activity patterns across subjects and between hemispheres of the same subjects.

Introduction

In functional brain imaging (functional MRI; fMRI), spatial normalization is often applied, where scans are transformed into a common space, so that the same coordinates in different subjects correspond to the homologous anatomical location in the brain. This makes statistics at a group level possible, allowing for the comparison of brain activity patterns between groups of subjects, for example patients and healthy controls. The quality of the normalization is a central determinant of the quality of the group-level statistics (Pizzagalli et al. 2013), making accurate normalization a crucial part of the processing pipeline. To join multiple brain images together for comparison of brain activation between groups (for example patients versus controls) or for determining common areas of activation (mapping), several options are available and widely used. One is 3D normalization using either a single image (for example the Talairach template), an average of co-registered images from multiple individuals unrelated to the study (for example the MNI templates), or an average of the study participants themselves (for example DARTEL (Ashburner 2007)). Alternatively, activity can be mapped on an inflated brain, where sulci are projected to a spherical surface or a flattened cortex map (Fischl et al. 1999), both of which allow for subsequent normalization (Qiu and Miller 2007; Van Essen et al. 2001). For certain research questions, the existing techniques for representing brain activity patterns do not suffice, due to the fact that borders between regions (defined by gyral and sulcal patterns) reflect the natural 3D folding patterns of the brain (Pizzagalli et al. 2013). Some applications, for example a quantitative comparison of topographical mapping of sensory and motor functions, would benefit from a representation in the form of a 2D rectangular mesh.
This constitutes an easy-to-interpret and uniform space, and would allow for easy comparison of activation patterns and distances between foci, while accounting for individual differences in the shape and size of the sensorimotor cortex. Moreover, such a representation could make cross-hemispheric comparisons more direct and accurate, something which is not possible using existing normalization methods, as they typically do not conduct a registration of the two hemispheres. It would also accommodate a more direct comparison or combination of data from different studies. A two-dimensional, grid-shaped representation has been described for the central sulcus, which was obtained by extraction of a 3D mesh of the central sulcus that was subsequently reparametrized with the y axis along the direction of the central sulcus, and the x axis along the direction of the sulcal depth (Coulon et al. 2011). Although Coulon's method elegantly maps the sulcus onto a grid, the sensorimotor cortex in fact also extends into the adjacent gyri, which is not included in their approach. Therefore, it is worthwhile transforming the whole pre- and postcentral gyrus into a Cartesian grid. Here, we propose a novel extension to existing methods for standardization of regions in the human brain allowing for quantitative comparisons, which maps the whole gyri to a Cartesian grid: Cgrid (Cartesian geometric representation with isometric dimensions). Cgrid builds upon methods for inflating the cortex, and constitutes imposing a Cartesian grid on the region of interest using anatomical (atlas-based) landmarks. One brain region that seems particularly suitable for transformation into a rectangular mesh is the sensorimotor cortex, comprising the primary sensory and motor areas (S1 and M1), because of its more or less rectangular shape with clear top, bottom and side boundaries. Cgrid is therefore first applied and validated on the precentral and postcentral gyrus. This special case is called 'Cgrid-SMX', where SMX stands for 'sensorimotor cortex'. Cgrid is meant to extend standard data preprocessing, adding the possibility of easily comparing patterns between subjects and between hemispheres. The presented implementation requires segmentation and atlas-based parcellation in FreeSurfer (Fischl 2012) and flat mapping with Caret (Van Essen et al. 2001), but accommodates any similar method. The Cgrid-SMX mapping was evaluated using data from 20 healthy volunteers who each performed four motor tasks (moving left hand, right hand, feet, and tongue). As activation patterns for these basic motor tasks are expected to be similar across subjects, and within subjects across hemispheres, the similarities of the patterns of activity were calculated as a measure of the validity of the transformation. The results were compared to the similarities obtained by SPM's normalization to MNI space (a commonly used normal space) as well as to the similarity of activation patterns after FreeSurfer normalization. This provided a benchmark for the performance of our new method.

Subjects

Twenty healthy volunteers participated in this study (age 26.7 ± 8.8 years, 9 females, all right handed). Subjects had no history of neurological or psychiatric disorders. Data acquisition was approved by the medical-ethical committee of the University Medical Center Utrecht and all subjects gave their written informed consent in agreement with the Declaration of Helsinki (World Medical Association 2013).
Structural MRI Preprocessing

For each subject, the cortical surface was reconstructed from the T1-weighted image using FreeSurfer, and automatically parcellated into ROIs using the Desikan-Killiany atlas (Desikan et al. 2006) (Fig. 1a). Each individual's surface was then flattened using Caret, making sure that the central sulcus was oriented vertically (that is, dorsal aspect at the top, ventral aspect at the bottom, which is necessary for the Cgrid procedure).

Definition of the Cgrid Standard Space

The flattened cortex was represented as a face-vertex mesh in 2D. Each vertex $v$ has an x- and y-coordinate, $v_x$ and $v_y$. Notably, because a flat map is a deformation of a spherical surface, distances on the flat map will not exactly correspond to distances on the brain. Therefore we will consider distances on the flat map to be measured in arbitrary units (a.u.), although 1 a.u. will approximate 1 mm. Each vertex was tagged with the ROI label indicating the underlying Desikan-Killiany atlas region, and $L(v)$ denotes the ROI label of vertex $v$. The topology describes which vertices are connected to form the faces of the mesh. Let the set of neighboring vertices of vertex $v$ be denoted by $\Omega_v$. The first step in defining the Cgrid standard space was the extraction of five anatomical borders. A border $B$ between two ROIs was defined as the set of vertices having ROI label $L_1$ while having one or more neighboring vertices with another ROI label $L_2$:

$$B(L_1, L_2) = \{ v \mid L(v) = L_1 \ \wedge\ \exists\, w \in \Omega_v : L(w) \in L_2 \} \qquad (1)$$

Three 'vertical borders' (the central sulcus border $B_{cs}$, the precentral sulcus border $B_{pre}$ and the postcentral sulcus border $B_{post}$, Fig. 1b) were defined using Eq. 1, where curly brackets indicate that $L_2$ can be one of the given labels, for example:

$$B_{pre} = B(\text{"Precentral gyrus"}, \{\text{"Pars opercularis"}, \text{"Caudal middle frontal"}, \text{"Superior frontal"}\})$$

Two 'horizontal borders' were defined, constraining the sensorimotor cortex at the dorsal ($B_{dor}$) and ventral ($B_{ven}$) side, for example:

$$B_{ven} = B(\{\text{"Precentral gyrus"}, \text{"Postcentral gyrus"}\}, \{\text{"Insula"}\})$$

Fig. 1 Applying Cgrid to the sensorimotor cortex. A: Brain parcellation from FreeSurfer. B: Flatmap representation, with the five borders that were extracted using labels from FreeSurfer's cortical parcellation according to Eq. 2-6 (solid lines: "vertical borders"; dashed lines: "horizontal borders"). A vertex was considered to be part of a border if it had a neighboring vertex with another FreeSurfer label. C: 10th-order polynomials were fitted through the three vertical borders, and in-between vertical curves were created by interpolation between y_min and y_max. Each curve C_i was then truncated using the horizontal dorsal and ventral borders (drawn in red in the inset) by selecting the node points closest to any node on these horizontal borders. D: Truncated vertical curves were divided into vertical segments, resulting in N × M "tiles". To map beta values from statistical maps to Cgrid, a beta value for each tile is calculated by averaging the beta values of vertices inside that tile. E: A Cgrid can be visualized as a rectangular grid, where the central sulcus is the middle, the anterior aspect (A) on the left side, posterior (P) on the right side, ventral (V) at the bottom and dorsal (D) at the top.

The next step consisted of fitting a 10th-order polynomial through each of the three vertical borders. The order 10 was chosen empirically and was found to result in a good balance between capturing the shape of the borders and still allowing for extrapolation, which is needed in a next step.
For generating these fits, the vertical coordinate of the vertices ($v_y$, the coordinate on the dorsal-ventral axis) was treated as the independent variable, and the horizontal coordinate ($v_x$, the coordinate on the anterior-posterior axis) as the dependent variable. The vertical curves were resampled and extrapolated such that they ran from $y_{min}$ to $y_{max}$ in unit steps (arbitrary units), thereby making sure that they covered the whole sensorimotor cortex, where $y_{min}$ and $y_{max}$ bound the sensorimotor cortex in the dorsal-ventral direction. In-between vertical polynomial curves were then created by linear interpolation of each of the 11 polynomial coefficients regularly at M + 1 points, thereby effectively dividing the sensorimotor cortex into M "columns" (Fig. 1c). As each in-between curve $C_i$ ran from $y_{min}$ to $y_{max}$, some of them extended too far outside the sensorimotor cortex. Therefore, they needed to be truncated at the dorsal and ventral borders. Let the $X_i$ nodes on the $i$th interpolated vertical curve be $C_i = \{u_j \mid j = 1 \ldots X_i\}$, with $u = (u_x, u_y)$. Ventral and dorsal cuts for curve $C_i$ were defined as the nodes $u_{C_i,ven}$ and $u_{C_i,dor}$ on the interpolated curves closest to any point on $B_{ven}$ and $B_{dor}$, where $d(u, v)$ denotes the Euclidean distance between vertices ($v$) and nodes on the curve ($u$):

$$u_{C_i,ven} = \underset{u \in C_i}{\operatorname{argmin}}\ \min_{v \in B_{ven}} d(u, v), \qquad u_{C_i,dor} = \underset{u \in C_i}{\operatorname{argmin}}\ \min_{v \in B_{dor}} d(u, v)$$

Each curve $C_i$ was then divided into $N$ segments (rows) by resampling $C_i$ from $u_{C_i,ven}$ to $u_{C_i,dor}$ in 0.1 arbitrary unit steps. For this step, the length of each curve was first estimated as the sum of distances between consecutive nodes, $l_i = \sum_j d(u_j, u_{j+1})$. Each of the vertical curves was then resampled again, where the distances between the nodes equaled $l_i/N$. This resulted in a grid imposed on the sensorimotor cortex, consisting of N rows and M columns, denoted as N × M "tiles". The final step consisted of mapping all vertices from the cortical surface into the newly defined standard space, by treating each tile as a polygon and determining which vertices are enclosed by that polygon. As a result, each vertex was associated with one tile in Cgrid. This association allows for mapping any kind of MRI data to Cgrid space, for example anatomical data, such as cortical thickness, or functional data. This mapping consists of two steps: first, the MRI data needs to be projected onto the cortical surface reconstruction vertices (using tools from the FreeSurfer package). Second, per tile a value (thickness, functional beta, etc.) can be calculated by taking the mean of all vertices for that tile (Fig. 1d). In the Evaluation section, the mapping to Cgrid-SMX space is demonstrated with task-based functional data. By convention, Cgrid visualizations in this paper are displayed (and processed) such that the precentral sulcus border is always on the left, and the postcentral border is always on the right. This means that the left half of the Cgrid images represents the precentral gyrus (M1), and the right part represents the postcentral gyrus (S1), regardless of the hemisphere (Fig. 1e).

Evaluation

Task-based fMRI activation maps for the 20 subjects were mapped to Cgrid-SMX. Activation patterns were generated for four movement tasks (see 'Task design', below). Cgrid-SMX space was evaluated by calculating the within-subject (left-right) and between-subject similarities of activation patterns in Cgrid space. For this, a Pearson correlation between Cgrid-SMX activation patterns was used. To benchmark the results, Cgrid-SMX pattern similarities were then compared to within- and between-subject pattern similarities in MNI space from SPM.
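Returning to the grid construction above, the main numerical ingredients (the Eq. 1 border extraction, the 10th-order polynomial fits with interpolated coefficients, and the per-tile averaging of Fig. 1d) can be sketched in a few lines. This is an illustrative Python/NumPy sketch under simplifying assumptions, not the authors' implementation: the data structures (`labels`, `neighbors`, a precomputed `tile_of_vertex` assignment) are hypothetical, the interpolation across the three fitted borders is one plausible reading of the text, and the truncation, arc-length resampling and polygon tests are omitted.

```python
import numpy as np

def extract_border(labels, neighbors, l1, l2_set):
    """Eq. 1: vertices with label l1 that have at least one neighbour whose label is in l2_set.
    labels[v] is the Desikan-Killiany label of vertex v; neighbors[v] lists adjacent vertices."""
    return [v for v, lab in enumerate(labels)
            if lab == l1 and any(labels[w] in l2_set for w in neighbors[v])]

def fit_border_polynomial(border_xy, order=10):
    """Fit x as a 10th-order polynomial of y (y is the independent variable, as described above)."""
    x, y = border_xy[:, 0], border_xy[:, 1]
    return np.polyfit(y, x, order)                       # 11 coefficients, highest power first

def interpolate_columns(coef_pre, coef_cs, coef_post, n_cols):
    """Create M+1 in-between vertical curves by linearly interpolating each of the 11
    coefficients across the three fitted borders (precentral -> central -> postcentral)."""
    anchors = np.vstack([coef_pre, coef_cs, coef_post])  # shape (3, 11)
    positions = np.linspace(0.0, 2.0, n_cols + 1)        # 0 = pre, 1 = cs, 2 = post
    return np.column_stack([np.interp(positions, [0.0, 1.0, 2.0], anchors[:, k])
                            for k in range(anchors.shape[1])])

def tile_means(beta_per_vertex, tile_of_vertex, n_tiles):
    """Average the beta values of all vertices assigned to each tile (Fig. 1d);
    empty tiles become NaN, matching their exclusion from the correlation analyses."""
    sums = np.bincount(tile_of_vertex, weights=beta_per_vertex, minlength=n_tiles)
    counts = np.bincount(tile_of_vertex, minlength=n_tiles)
    return np.divide(sums, counts, out=np.full(n_tiles, np.nan), where=counts > 0)
```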
Evaluation
Task-based fMRI activation maps for the 20 subjects were mapped to Cgrid-SMX. Activation patterns were generated for four movement tasks (see 'Task Design', below). Cgrid-SMX space was evaluated by calculating the within-subject (left-right) and between-subject similarities of activation patterns in Cgrid space. For this, a Pearson correlation between Cgrid-SMX activation patterns was used. To benchmark the results, Cgrid-SMX pattern similarities were then compared to within- and between-subject pattern similarities in MNI space from SPM. We focused on four regions of interest (ROIs): left M1, left S1, right M1, and right S1.

Task Design
Subjects executed four separate movement tasks: following a visual cue, subjects were instructed to move their right hand ("Hand-Right task", opening and closing), their left hand ("Hand-Left task", opening and closing), their tongue ("Tongue task", moving from left to right), or both feet ("Feet task", rotating both feet about the ankle simultaneously). Each task was set up as a block design, with pseudorandom movement blocks ranging from 15 to 45 s followed by rest blocks ranging from 15 to 45 s.

Cgrid Activation Maps
Task data were slice-time corrected, realigned and coregistered to the subject's anatomical scan to correct for movements using SPM12 (http://www.fil.ion.ucl.ac.uk/spm/). A GLM analysis with one regressor for movement was applied to the task data using the contrast 'movement versus baseline', resulting in one statistical map (beta map) per task. These beta maps were then projected onto the cortical surface reconstruction vertices using FreeSurfer (with projection fraction 0.5 and a smoothing of 6 mm FWHM). A beta value was then computed per tile by taking the mean of the beta values for all vertices within that tile. This resulted in beta maps in Cgrid-SMX space for each of the four ROIs.

MNI Activation Maps
To benchmark the performance of Cgrid space, functional scans were also normalized to MNI for all subjects using SPM12, and likewise smoothed with a 6 mm FWHM Gaussian kernel. After normalization and smoothing, a GLM with one regressor for movement was fit to the task data and statistical maps were created using the contrast 'movement versus baseline'. Four ROI masks in MNI space (left M1, left S1, right M1, and right S1) were initially taken from the Brainnetome Atlas (Fan et al. 2016). Since the method of calculating similarities between hemispheres requires left and right ROIs to be symmetrical, the right M1 was flipped to the left hemisphere and combined with left M1 (voxel-wise union). The resulting ROI was then flipped back to the right hemisphere. The same was done for S1. The resulting ROIs were used to mask the beta maps and obtain activity patterns for the four tasks in each of the four ROIs.

Within-Subject Pattern Similarity (Left-Right)
As the Cgrid-SMX space is expected to minimize anatomical differences between the left and right motor cortex, left and right activation patterns should demonstrate high similarity within subjects. For the Feet task and Tongue task, the similarity between left and right Cgrid patterns was calculated using Pearson correlation. For the hand tasks, the correlation between contralateral activation patterns was calculated, that is, the similarity between the left pattern from the Hand-Right task and the right pattern from the Hand-Left task. All Pearson correlations were transformed to 'similarity (z-)scores' using the Fisher z-transform (which is equal to the inverse hyperbolic tangent, arctanh), to allow averaging and statistical testing across subjects. The 6 similarity scores for each subject (Tongue, Hand and Feet for M1 and S1) were then averaged per subject over ROIs and tasks to obtain a single within-subject (left-right) similarity per subject for Cgrid. Similarity scores can be transformed back to (group-averaged) correlations using the inverse Fisher z-transform (the hyperbolic tangent, tanh).
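The two similarity measures used in the evaluation reduce to a few lines of code. The sketch below assumes that Cgrid patterns are stored as flat NumPy arrays with NaN for empty tiles; it illustrates the Fisher-z correlation and the leave-one-out scheme described in this and the next subsection, and is not the authors' analysis script.

```python
import numpy as np
from scipy import stats

def fisher_z(pattern_a, pattern_b):
    """Fisher z-transformed Pearson correlation between two Cgrid patterns."""
    a, b = np.ravel(pattern_a), np.ravel(pattern_b)
    keep = ~np.isnan(a) & ~np.isnan(b)          # ignore tiles without vertices
    r, _ = stats.pearsonr(a[keep], b[keep])
    return np.arctanh(r)

def leave_one_out(patterns):
    """Between-subject similarity: each subject vs. the mean of all others.
    patterns: (n_subjects, n_tiles) array for one task and one ROI."""
    scores = []
    for s in range(patterns.shape[0]):
        others = np.delete(patterns, s, axis=0).mean(axis=0)
        scores.append(fisher_z(patterns[s], others))
    return np.array(scores)

# Paired comparison of per-subject scores in Cgrid vs. MNI space
# (z_cgrid and z_mni are hypothetical length-20 arrays of averaged scores):
# t, p = stats.ttest_rel(z_cgrid, z_mni)
```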
Similarity scores for MNI space were calculated similarly, and differences in similarity scores between Cgrid-SMX and MNI space were assessed using a paired-samples t-test.

Between-Subject Pattern Similarity
To assess between-subject pattern similarity, a per-subject similarity score was calculated using a leave-one-out approach, where the pattern of the subject under investigation was correlated with the mean pattern of the other subjects. This resulted in similarity scores per task and ROI for every subject, which were then averaged to obtain a mean similarity score per subject. The same approach was applied to the patterns in MNI space, and a paired-samples t-test was conducted to compare the between-subject similarity scores for Cgrid and MNI space. Since MNI is a 3D space and Cgrid is a 2D space, the difference in the dimensionality of the approaches might bias the comparison. FreeSurfer includes surface-based normalization through spherical registration, using the FS-average as template. All subjects were normalized using this approach. Then, activation patterns in FS-average space were extracted by selecting the beta values in the nodes of the pre- and postcentral gyri. A between-subject similarity was calculated per subject following the same scheme as for Cgrid and MNI, using a leave-one-out approach.

Effect of Smoothing on Between-Subject Correlations
For the within- and between-subject similarities, a Gaussian smoothing kernel of 6 mm FWHM was used. However, since the impact of a smoothing kernel can be different between Cgrid (2D space) and MNI space (3D), we tested the effect of the smoothing kernel on the similarities. This was done by repeating the between-subject analysis described above, using different smoothing kernels both in MNI space and on the cortical surface in the Cgrid pipeline (see above). Kernel sizes of 4, 6, 8, 10, 12, 18, 25 and 35 mm FWHM were used. A two-way repeated measures ANOVA was conducted to compare the effects of method and smoothing kernel size on the between-subject similarity score.

Defining Cgrid Space
Surface reconstructions of all 20 subjects were generated using FreeSurfer. The five borders (central sulcus, precentral sulcus, postcentral sulcus, ventral border, and dorsal border) were extracted, and visual inspection of the fitted curves confirmed that a 10th-order polynomial fit was sufficient to capture the shape of the borders accurately in all subjects. A Cgrid standard space was defined and resulted in a 28 × 84 tiled mesh per hemisphere in all subjects. A tile covered 2.62 ± 0.71 mm² (mean ± sd) and contained 6 ± 1 vertices. On average, 21 ± 10 tiles (1.8% of all tiles) did not contain any vertices that were labelled as being part of the sensorimotor cortex; these tiles were mostly located at the edges of the Cgrid and were excluded from the correlation analyses.

Mapping Beta Maps to Cgrid Space
Volumetric statistical group maps of the tasks showed sensorimotor activation in distinctive foot, hand, and tongue areas (see Fig. 2). The feet and tongue tasks activated both the left and right sensorimotor cortex. There was no excessive motion (mean absolute translation over all subjects and tasks: 0.17 ± 0.10 mm; mean rotation: 2.8 × 10^-3 ± 2.3 × 10^-3 degrees). Visual inspection of the resulting Cgrid group-mean activation maps, averaged over subjects, confirmed that Cgrid was capable of capturing the different activation hotspot patterns associated with movement of the respective body parts (Fig. 3).
Feet activation was located at the dorsal side of the sensorimotor cortex, tongue activation was located towards the ventral side, and hand activation was located mostly contralaterally at approximately one third of the dorsal-ventral axis. Average activation hotspots for all tasks were mostly located within the central sulcus. Whereas the group average of Cgrid patterns demonstrated strong hotspot-like activation, task activation patterns per individual did not necessarily consist of only a single hotspot, but were sometimes complex patterns, varying somewhat across subjects (Fig. 4).

Within-Subject Pattern Similarity (Left-Right)
The similarities between left and right hemispheric patterns within subjects from the feet, hand, and tongue tasks were computed using Fisher z-transformed Pearson correlations for both Cgrid and MNI space. A second-level paired t-test demonstrated a significantly higher similarity in Cgrid (Fisher Z = 0.80 ± 0.09, mean ± standard deviation) than in MNI space (Fisher Z = 0.67 ± 0.08); t(19) = 6.70, p < 0.001 (Fig. 5a).

Between-Subject Pattern Similarity
The similarity of patterns between subjects was calculated per task and per ROI using Pearson correlations in a leave-one-out approach. A paired t-test demonstrated a significantly higher correlation in Cgrid (Fisher Z = 0.92 ± 0.09) than in MNI space (Fisher Z = 0.84 ± 0.16); t(19) = 8.25, p < 0.001 (Fig. 5b).

Effect of Smoothing on Between-Subject Correlations
Calculating between-subject similarities with different smoothing kernels resulted in higher similarity scores with larger smoothing kernels for both Cgrid-SMX and MNI space (Fig. 7). A two-way repeated measures ANOVA showed a significant effect of method on the between-subject similarity score, indicating that Cgrid similarities are higher than similarities in MNI space for all smoothing kernel sizes.

Discussion
We introduce Cgrid-SMX as a Cartesian representation of the sensorimotor cortex, based on anatomical atlas-based landmarks and building upon existing data processing methods. Cgrid imposes a grid on the sensorimotor areas, thereby effectively transforming them into a rectangular, tiled mesh. Cgrid was successfully applied to 20 healthy subjects on both the left and right hemisphere. Comparing sensorimotor activity patterns between individuals and between hemispheres yielded high similarity scores, exceeding those obtained with analysis of the same data in MNI space and comparable to similarity scores calculated in FreeSurfer space. These findings indicate that Cgrid yields a representation that allows for a straightforward way of comparing activity patterns in sensorimotor cortex, performing at least as well as representations from the more standard FreeSurfer and MNI approaches in terms of pattern similarities.

Transforming regions of the brain into a grid-like representation has also been reported in the literature. It has been applied to the visual cortex, based on statistical modelling of the borders using visual stimuli (Corouge et al. 2004). The central sulcus itself has also been transformed into a 2D grid mesh (Coulon et al. 2011), and even the whole cortex has been parametrized using the alignment of sulci. However, there are some key differences between these approaches and Cgrid.

Fig. 2 Group activation map of the movement tasks (contrasts used: Feet > baseline, Tongue > baseline, Left hand > baseline, and Right hand > baseline). Contrasts are displayed on a standard MNI brain with threshold t > 8.
First, the method described by Coulon only covers the cortex inside the central sulcus, whereas our method maps the surface of the whole gyrus. Second, the Cgrid method is described in such a way that it can be applied to any brain region, as long as clear borders can be defined. It does not statistically model the borders, but rather extracts them from existing atlases. This makes Cgrid a versatile tool, since it is easy to select a different set of borders if desired. Third, the simple geometry of the Cgrids allows for an easy-to-interpret visualization, which was one of the goals for the development of Cgrid.

The validity of using Cgrid was confirmed by multiple findings. First, analysis of Cgrid-transformed group-averaged activity patterns associated with movement (feet, left hand, right hand, and tongue) resulted in focal activation hotspots. The location of these hotspots allowed for a clear differentiation between the studied motor functions and preserved the topographical distinction between body parts, according to what is known from the literature: feet activity was located near the medial wall, tongue activity was bilaterally located in the ventral sensorimotor area, and hand activation was located about halfway along the dorsal-ventral axis, mainly on the contralateral hemisphere. Second, as Cgrid is designed as a representation accounting for anatomical differences, we expected a high similarity between the left and right Cgrid activation patterns within a subject, and also a high similarity between Cgrid activation patterns across subjects. Indeed, averaged over tasks and ROIs, similarity scores were high both for within-subject (Fisher Z = 0.8, corresponding to a Pearson correlation of R = 0.66) and between-subject (Fisher Z = 0.97, R = 0.75) comparisons. This indicates that there is a good correspondence of the functional localization in Cgrid both between left and right cortex within subjects and between subjects, supporting the utility of the common space transformation.

Finally, we compared the within-subject and between-subject similarities of Cgrid activation patterns to those in MNI space, as this is the most widely used standard space for normalization. When benchmarking Cgrid activity patterns against those from MNI, both within- and between-subject similarities were higher for Cgrid than for MNI space. It should be noted, however, that the comparison of these two methods should be taken with some caution. First, the different spaces (2D flat map and 3D MNI volume) required the use of different atlases. The Desikan-Killiany atlas is provided with FreeSurfer and is the atlas from which borders for Cgrid are detected, but this atlas has been developed for surface-based analysis and can therefore not be used in 3D volumes. While a volumetric version of the Desikan-Killiany atlas exists, it only labels the grey matter voxels of the FreeSurfer average, rendering it unsuitable as an atlas for SPM volumetric normalization. Although the use of different atlases is not optimal, the labels used by these two different atlases (precentral and postcentral) indicate highly similar brain areas. Any difference in results that originates from differences in labels would be small. Second, although smoothing kernels with the same sizes were used in both Cgrid and MNI, the effect of smoothing may differ, as in Cgrid smoothing was done in 2D on the surface and in MNI in 3D on the whole volume.
Fig. 5 A: Within-subject similarities (averaged over tasks and hemispheres) per subject, for Cgrid (red dots) and MNI space (blue dots). Similarities in Cgrid space were significantly higher than in MNI space. B: Between-subject similarities (averaged over tasks and hemispheres) per subject, for Cgrid (red dots) and MNI space (blue dots). Similarities in Cgrid space were significantly higher than in MNI space.
Fig. 6 Between-subject similarities in Cgrid-SMX and in the FreeSurfer normalized space (FS-average). There was no significant difference in similarity scores between the two methods.
Fig. 7 Between-subject correlations (averaged over tasks and ROIs) as a function of smoothing kernel size. The dashed line indicates the kernel size used for smoothing in both the Cgrid-SMX and MNI space analyses throughout the text (6 mm FWHM).
Figure caption fragment: "Note that for all Cgrid-SMXs the left side is anterior in the brain."

Smoothing in 3D can possibly also include signals from, for example, white matter, or even from areas that are relatively remote when measured across the surface of the cortex, but proximate in 3D space. Comparison of the two normalization methods over a wide range of smoothing kernels, however, revealed that correlations were generally higher in Cgrid than in MNI space, even with larger kernels. Third, calculating a similarity between patterns from both hemispheres in MNI space was only possible when mirroring the masks for the somatosensory cortex across the longitudinal fissure. This is because, for the correlation, left and right ROIs need to be symmetrical (with the same number of voxels and the same spatial configuration), which is not necessarily the case in an atlas. Therefore, we mirrored the ROIs, although this does not yield an ROI that is perfectly anatomically aligned and possibly affects the correlation between left and right. Note that this limitation reflects one of the advantages of Cgrid, where coordinates within the left and the right hemisphere are automatically matched. Fourth and finally, the comparison between MNI and Cgrid was performed using only the default settings for normalization in SPM12; the results therefore indicate that Cgrid yields higher pattern similarities than normalization to MNI space in a commonly used implementation. Results of the comparison might differ when alternative settings are used. However, the aim was not to optimize the MNI normalization, but to provide a benchmark that reflects a well-known and commonly used normalization method.

In testing the validity of the Cgrid approach, it is assumed that the topographical organization of the sensorimotor cortex is in proportion to its shape. This means that even if the absolute location of an activity hotspot differs from one subject to another, the hotspot's relative location (that is, the location relative to the dimensions of the sensorimotor cortex) is assumed to be the same across subjects. Likewise, this assumption also applies to the left versus the right hemisphere. Cgrid exploits this postulated relative organization of the sensorimotor cortex and effectively places the sensorimotor cortex of each individual in a proportional space. As a result, anatomical differences between subjects are discounted, as well as differences between the left and right sensorimotor cortex. Subjects displayed some variation not only in the magnitude and location of activity, but also in the extent of activation along the sensorimotor cortex (compare, for example, the tongue activity on the right hemisphere in subjects 4 and 6).
These differences may reflect variations in cortical representation, but may equally well reflect differences in how tasks (even simple tasks) are performed. The calculated similarity scores are derived from Pearson correlations of the complete Cgrid pattern, and thus include areas that should not activate during the task. This makes the measure sensitive to engagement of additional body parts in a given task. Cgrid employs several cumulative preprocessing steps that may increase the chances of biasing results for individual subjects. It is, however, difficult to evaluate on theoretical grounds the impact of each individual processing step and its interaction with the other steps. Similarities from Cgrid representations were compared to other methods for brain normalization, where biases should have similar effects. If individual results were excessively biased, such bias would negatively impact the similarity across subjects, and our method would perform worse than the others, which was not the case. Given a flattened surface reconstruction, the Cgrid method is automatic. We used Caret to generate the flattened surfaces, which requires some manual steps, but this could be automated as well. Although the current implementation of the mapping is fully automatic, manual adjustments to the procedure may be needed in cases where the integrity of gyri and sulci is compromised, for example in patients suffering from brain atrophy or lesions. An algorithm monitoring the deviation of the precentral and postcentral borders with respect to the central sulcus could be devised to notify the user when a manual adjustment is needed. Cgrid is particularly suitable for studying activity patterns on the left and right sensorimotor cortex within subjects and for the comparison of groups of subjects (for example healthy and diseased), as well as for longitudinal studies on, for example, normal development or disease-related processes, where it can be used to quantify and visualize changes in activation hotspots over time. It might be less beneficial in cases where very detailed patterns in individual subjects are studied, as transformation of these patterns could be disruptive. Advantages of Cgrid are that it provides a clear, easy-to-interpret and consistent representation of the sensorimotor cortex. It allows for a straightforward comparison of activation patterns between groups of subjects, but also for quantification of possible alterations (for example shifts and focality) in activation patterns in longitudinal studies, for example in the areas of development, progressive disease or plasticity (Bruurmijn et al. 2017). As the sensorimotor cortex of each individual is mapped onto the same space, Cgrid allows for comparing whole activity patterns at once, even if they consist of multiple distributed hotspots. In principle, the Cgrid approach can be extended to other primary anatomical regions, and perhaps even to associative cortex where topography is less consistent. Moreover, Cgrid allows for mapping of any cortical parameter, and can accommodate weighting of tile values by the number of included vertices where relevant. In conclusion, we present a Cartesian representation of the anatomical sensorimotor cortex in humans, with the aim of facilitating quantitative comparisons of brain activity within and between subjects and of visualizing the results.
Results from 20 subjects show that Cgrid performs as well as or better than analyses in MNI space, while carrying the benefit of enabling spatial quantitative comparisons of activity patterns.

Information Sharing Statement
The Cgrid method has been put into a toolbox and can be downloaded from https://github.com/mathijsraemaekers/Cgrid-toolbox. The ethics protocol does not permit publication of the data in a public repository, but does allow data sharing upon request. Please contact the corresponding author.
7,667.8
2019-12-03T00:00:00.000
[ "Biology", "Psychology" ]
Synthesis and Antimicrobial Activity of δ-Viniferin Analogues and Isosteres
The natural stilbenoid dehydro-δ-viniferin, containing a benzofuran core, has recently been identified as a promising antimicrobial agent. To define the structural elements relevant to its activity, we modified the styryl moiety appended at C5 of the benzofuran ring. In this paper, we report the construction of stilbenoid-derived 2,3-diaryl-5-substituted benzofurans, which allowed us to prepare a focused collection of dehydro-δ-viniferin analogues. The antimicrobial activity of the synthesized compounds was evaluated against S. aureus ATCC29213. The simplified analogue 5,5′-(2-(4-hydroxyphenyl)benzofuran-3,5-diyl)bis(benzene-1,3-diol), obtained in three steps from 4-bromo-2-iodophenol (63% overall yield), emerged as a promising candidate for further investigation (MIC = 4 µg/mL).

Introduction
Resveratrol-derived natural products, belonging to the class of polyphenolic stilbenes, have increasingly attracted the attention of the scientific community because of their diverse biological activities and intriguing molecular architectures [1][2][3]. Part of the growing interest in the pharmacological potential of this class of molecules derives from the poor understanding of the in vivo mechanisms of action of their parent compound resveratrol, which severely limits its therapeutic use [4], and from the need to overcome its low bioavailability and poor in vivo stability. Over the last few years, several efforts have been made towards the synthesis of complex natural resveratrol oligomers, by biomimetic and de novo approaches [1,[5][6][7][8][9]. However, only a few research groups have focused on the synthesis of new resveratrol-derived chemical scaffolds with improved pharmacodynamics and pharmacokinetics with respect to the natural precursors [6,[10][11][12][13][14]. In this scenario, we planned to set up a versatile and efficient synthetic strategy for the construction of dimeric resveratrol-derived benzofurans. Benzo[b]furan-containing molecules, present in numerous bioactive natural compounds, have been extensively studied because of their wide array of biological activities, including anticancer, antimicrobial, immunomodulatory, antioxidant, and anti-inflammatory properties [15][16][17][18]. It is noteworthy that, in recent years, the benzofuran motif has emerged as a pharmacophore of choice for the design of new antimicrobial agents [19,20]. We have recently reported the synthesis and the antimicrobial activity evaluation of a collection of resveratrol-derived monomers (i.e., resveratrol, pterostilbene, and piceatannol) and dimers (i.e., trans-δ-viniferin, trans-ε-viniferin, pallidol, dehydro-δ-viniferin, and viniferifuran) against a series of foodborne pathogens [21]. Thus, we planned to prepare a novel set of dehydro-δ-viniferin analogues and isosteres, obtained by modifying the styryl moiety A (Figure 1), while keeping rings B and C unaltered. In particular, removal of the double bond or its replacement with moieties such as an amide, an alkyne or a saturated chain could clarify the role of geometry and stereoelectronic effects in the antimicrobial activity. In addition, we planned to synthesize dehydro-δ-viniferin analogues that maintained the stilbene double bond but carried aromatic rings different from the resorcinol moiety.
These design choices were informed by a previous SAR study performed by our group on simplified analogues of 1 (compounds 2, 3, 4) [22], which were obtained by the selective removal of the moieties linked at positions two, three, and five of the benzofuran core. That study showed that none of the structurally simplified compounds was more active than the precursor (Figure 1). In particular, a drastic drop in antibacterial activity, due to the fatal lack of ring B, was observed for derivative 3 (MIC value of 743 µM against 4.42 µM for dehydro-δ-viniferin), suggesting the fundamental role of the aryl ring at position three of the benzofuran core. An important loss of antimicrobial activity, albeit to a lesser extent, was observed for compounds 2 and 4, obtained by removal of the styryl group at position five and of the aryl ring at position two, respectively (MIC values of 50.3 µM (2) and 44.5 µM (4), vs. 4.42 µM (1)) (Figure 1).

In this perspective, we needed a versatile strategy to construct the 2,3-diaryl benzofuran ring bearing at C-5 a proper functional group (X) for the insertion of the appropriate fragment (Figure 2). Among the various methods to access stilbenoid-derived 2,3-diaryl-5-substituted benzofurans [23][24][25][26][27][28], palladium-catalysed reactions have proven to be rapid and convenient. In particular, an efficient one-pot method developed by Cacchi and coworkers [29], and subsequently implemented by Markina and coworkers [30], involves a Sonogashira coupling between an ortho-iodophenol and an aryl-substituted terminal alkyne to generate, at room temperature, the corresponding internal alkyne. The alkynylphenol obtained as an intermediate undergoes a simultaneous cyclization with the adjacent phenol group and an oxidative addition with the aryl-iodide-palladium complex with CuI, in acetonitrile at 100 °C, under microwave irradiation. Using this approach, we obtained C5-substituted 2,3-diarylbenzofurans in a three-component one-pot reaction in 48-72% yields.
Specifically, we generated the bromo-functionalized intermediate 8 by reaction of 4-bromo-2-iodophenol 5, 4-ethynylanisole 6 and 3,5-dimethoxy-1-iodobenzene 7 (Scheme 1). Compound 8 underwent a Suzuki coupling with (3,5-dimethoxyphenyl)boronic acid with Pd(PPh3)4 and aqueous 1 M Cs2CO3 in a DMF/EtOH mixture (1:1), under microwave irradiation, for 20 min at 120 °C [30], to afford compound 9 in 91% yield. Final demethylation with BBr3 provided 10, a simplified analogue of our hit compound 1 lacking the stilbene double bond.

We then focused on the synthesis of the isosteres bearing an amide in place of the double bond. Amide isosteres of resveratrol have shown activity similar to that of the parent compound [31]. The amide linkage should maintain the transoid architecture of the trans-stilbene, while conferring improved solubility and increased polarity as well as differences in electronic perturbations [32,33]. Therefore, analogue 15 was synthesized (Scheme 2). The Sonogashira/Cacchi-type cyclization of the commercially available methyl 4-hydroxy-3-iodobenzoate 11, 4-ethynylanisole 6 and 3,5-dimethoxy-1-iodobenzene 7 gave the desired benzofuran 12 in 66% yield. Hydrolysis of the ester 12 was performed with LiOH·H2O in a THF/water mixture (1:1) for 24 h. The resulting carboxylic acid 13 was reacted with 3,5-dimethoxyaniline, in the presence of EDC·HCl and HOBt, to give amide 14, which was demethylated with BBr3 to afford compound 15 in 73% yield.
The ester 12 was also envisaged as a versatile intermediate for the preparation of a set of dehydro-δ-viniferin derivatives differently substituted on ring A (Scheme 3). Reduction with LiAlH4 gave compound 16 quantitatively, which was converted into the corresponding bromide derivative with PBr3. Reaction with triethyl phosphite at 130 °C overnight then afforded the phosphonate 17 in 80% yield over two steps. The HWE reaction with 4-methoxybenzaldehyde provided the desired stilbene 18, only as the trans isomer, in 86% yield. Unfortunately, attempts to deprotect the methyl groups with BBr3 at −78 °C in dry DCM, following the usual procedure, gave only degradation products. Several troublesome efforts in the demethylation process confirmed that this step is an Achilles' heel in the synthesis of stilbenoid-derived compounds [6,10,14,22]. Methyl groups are convenient protecting groups for phenolic moieties because of the availability of their starting reagents and their high stability to a wide variety of reaction conditions. However, as a not-negligible drawback, their high robustness requires harsh conditions in the deprotection step, often resulting in poor yields and product degradation in the presence of highly reactive double bonds [5,6,10,22]. As stilbenoids are known to form dimers and polymers with a variety of acids, including BBr3 [34,35], alternative protocols were investigated. We first attempted to obtain the desired compound 19 by initial deprotection of the bromo derivative 8, followed by direct insertion of the p-hydroxystyryl moiety via the Heck reaction. However, the reaction gave a mixture of 19 and its isomer 20, which coeluted in column chromatography (Scheme 3). In another synthetic route, 2-iodo-4-methylphenol 22, prepared in excellent yield from para-cresol (21) with N-iodosuccinimide and para-toluenesulfonic acid in acetonitrile [36], was used as the starting material (Scheme 4). Under the one-pot Sonogashira-Cacchi reaction conditions, the obtained intermediate gave the desired benzofuran derivative 23 in 48% yield. Intermediate 23 was smoothly demethylated to afford compound 24 in 90% yield. The protection of the hydroxy groups with tert-butyldimethylsilyl chloride and imidazole was performed in 1,2-dichloroethane at 60 °C, to give compound 25 in good yield (86%) [8].
Then, a radical bromination of the methyl group with NBS and AIBN as a radical initiator at reflux in CCl4 gave a brominated intermediate, which was converted into the corresponding phosphonate 26 with triethyl phosphite at 130 °C (84% yield). The intermediate 26 was reacted with the appropriately protected 3,4-bis((tert-butyldimethylsilyl)oxy)benzaldehyde in the presence of LDA in THF in 16% yield. The use of NaH increased the yield to 52%. Eventually, the deprotection of the silyl groups was performed with tetrabutylammonium fluoride (TBAF) at 0 °C in THF, to afford compound 28, bearing a catechol on the styryl moiety (60% yield).

The protection of phenol groups as tert-butyldimethylsilyl ethers was also applied to the synthesis of the alkyne derivative 32 (Scheme 5). The high-yield demethylation of the brominated intermediate 8 was thus followed by protection of the hydroxy groups as tert-butyldimethylsilyl ethers (28). The alkyne 31 was obtained starting from 3,5-dihydroxybenzaldehyde 29, which was appropriately protected and then subjected to Corey-Fuchs conditions [37] to give the terminal dibromoalkene 30, which underwent lithium-halogen exchange and α-elimination with LDA to afford 31 in excellent yield. The final Sonogashira coupling was performed with Pd(PPh3)4 and CuI in triethylamine at reflux for 8 h. The crude compound obtained was directly deprotected with KF to give the desired alkyne 32 in 38% yield over two steps.

Finally, compound 33, having a saturated chain in place of the stilbene double bond, was obtained in quantitative yield by hydrogenation of dehydro-δ-viniferin 1 with Pd/C in ethanol at room temperature for 3 h (Scheme 6). Hydrogenation of δ-viniferin 34, applying the same protocol, led to a dihydrobenzofuran ring cleavage (compound 35) [38].
It has been shown that the growth media play an important role in the outcome of bacterial susceptibility to differently charged peptides. Antimicrobial assays were therefore performed in MHB cation-adjusted medium (MHB-II), a complex growth medium [39], and also in the less complex medium TSB [21]. In TSB we achieved approximately equal susceptibility results, uniform growth, and less variation across the repeated independent experiments. Unexpectedly, in both sets of experiments we noticed that at high concentrations the active compounds lost their ability to inhibit the growth of the microorganism. In particular, in the MHB-II medium, compound 1 lost its activity at concentrations higher than 8 µg/mL, compounds 10, 15, 27, and 32 at concentrations higher than 32 µg/mL, and compound 33 at concentrations higher than 16 µg/mL. A similar behaviour was observed for all the compounds in the TSB medium. These results could be explained by a self-aggregation process of the tested compounds in the solvent system. In the MHB-II medium, the MICs of the tested compounds ranged from 2 to 256 µg/mL. The majority of compounds showed detectable antimicrobial activity in the MIC range of 2-16 µg/mL. The removal of the double bond (compound 10; MIC 4 µg/mL), as well as its reduction (compound 33; MIC 2 µg/mL) and its replacement with a triple bond (compound 32; MIC 4 µg/mL), gave compounds which maintained significant activity. Conversely, the replacement of the double bond with an amide group (compound 15) was deleterious (MIC 16 µg/mL). Also, the replacement of ring A with a catechol was not successful in terms of activity, as compound 27 had a MIC of 16 µg/mL. Compound 35, obtained by opening the benzofuran system, showed a very high MIC (256 µg/mL). This result confirmed that the heterocyclic ring plays an essential role in the antimicrobial activity.

Materials and Methods
Synthesis. All chemicals used were of analytical grade.
Procedures for the synthesis and characterization data for the various derivatives and intermediates are detailed in the Supplementary Materials.

Determination of minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC). The MIC of the compounds was determined for S. aureus ATCC29213. The concentration range of the compounds was 0.25-512 µg/mL. Tobramycin (T2503, TCI Europe N.V.) was used as a control, with a concentration range of 0.5-64 µg/mL. One colony of S. aureus was inoculated in 5 mL of growth medium and incubated overnight in a water bath at 37 °C, 180 rpm. Three biological replicates were used. The overnight cultures were diluted 1:50 and grown to exponential phase at OD600 ≈ 0.4, in both MHB-II and TSB. The bacterial culture was diluted 1:500 and transferred to a microdilution plate together with the compounds. The plate was then sealed and incubated overnight at 37 °C. After incubation, the plates were examined for microbial growth. A CFU assay was performed to estimate the final concentration of the 1:500 diluted culture; the expected concentration range was 2 × 10^5-8 × 10^5 CFU/mL. The results were read 24 h after incubation. To determine the MBC, 10 µL of each compound concentration from the MIC assay was transferred to LB (L3022, Sigma Aldrich) agar plates. The plates were incubated overnight at 37 °C. After incubation, the lowest concentration at which no visible microbial growth was found was considered the MBC.

Conclusions
The resveratrol dimer dehydro-δ-viniferin, containing a benzofuran core, has been identified as a promising antimicrobial compound. As part of the search for new antimicrobials, our recent interest has been directed to the synthesis of new dehydro-δ-viniferin analogues, to gain insights into the structural determinants of their activity. We investigated various protocols to access stilbenoid-derived 2,3-diaryl-5-substituted benzofurans, highlighting critical steps such as the demethylation of the phenolic groups. Following these strategies, we prepared a focused collection of analogues, which were tested to evaluate their antimicrobial activity. Because of the modular nature of the synthetic approaches, ready access to diversity-oriented libraries of stilbenoid-derived benzofurans could be available. Our study has shown that the styryl moiety appended at C5 of the benzofuran ring can be modified without affecting the antimicrobial activity of the compounds. Notably, the removal of the double bond (compound 10) and its conversion into a rigid linear triple bond (compound 32), or into a more flexible saturated chain (compound 33), gave compounds which were still endowed with significant antimicrobial activity. In this context, the simplified analogue 10 could represent a promising model compound for further development and investigation.
6,416.2
2021-12-01T00:00:00.000
[ "Chemistry" ]
New hybrid multivariate analysis approach to optimize multiple response surfaces considering correlations in both inputs and outputs
Quality control in industrial and service systems requires the correct setting of input factors so that the outputs attain desirable characteristics at minimum cost. There are often more than one input and output in such systems. Response surface methodology, in its multiple-variable forms, is one of the most widely applied methods to estimate and improve the quality characteristics of products with respect to control factors. When there is some degree of correlation among the variables, the existing methods might lead to misleading improvement results. The current paper presents a new approach that combines the benefits of principal component analysis and multivariate regression to cope with these difficulties. The global criterion method of multiobjective optimization is also used to reach a compromise solution that improves all response variables simultaneously. Finally, the proposed approach is illustrated with a numerical example.

Introduction
Making decisions about complex problems involving process optimization and engineering design strongly depends on well-identified effective factors. From the viewpoint of quality, a process should be designed so that the products can satisfy customers' needs. Quality engineering techniques try to find the interrelations between input parameters and output quality characteristics (also called response variables), as well as to improve the outputs. A common problem in product or process design is to determine the optimal levels of control variables when there are several outputs, which are often highly correlated. This problem is called multi-response optimization (MRO) with correlated responses. Several studies have presented approaches addressing multiple quality characteristics, but few published papers have focused primarily on the existence of correlation.

Correlation can also meaningfully affect the analysis of the MRO problem in another way. Nuisances in experiments may be classified into the following three categories (MONTGOMERY, 2005). 'Known and controllable variables' can be controlled, but their effect is not of interest as a factor. For this kind of nuisance, a technique called blocking can be used to systematically eliminate its effect in the statistical analysis. 'Unknown and uncontrollable variables' are factors whose existence is unknown and which may even change levels while the experiments are conducted. Randomization is the design technique used to deal with such nuisance factors. 'Known and uncontrollable variables' cannot be set by the experimenter but can be measured during the experimental runs; these are called covariates. In this case, finding the individual effect of each covariate and its interactions with other variables can help analysts to improve response values.

A complex process or system may be affected by stochastic covariates, which can themselves be correlated. The correlation among inputs adds further complexity to both estimation and optimization. This paper proposes a methodology that can analyze correlated multiple response surfaces fitted on control factors and correlated covariates. The global criterion (GC) method of vector optimization is also applied, since there are several output characteristics to be optimized.
The structure of the remaining part of this paper is as follows. The next section provides a summary of MRO approaches, with special focus on correlated responses and correlated covariates. Afterwards, the required information about the proposed methodology is provided. Finally, Section 4 illustrates the method with a numerical example.

In multiresponse modeling there are often three types of variables: factors, nuisances and responses. When a significant degree of correlation exists among the variables, the standard methods cannot estimate the model precisely and, consequently, the optimization results might be unreliable. Modeling and optimization of correlated response surfaces have recently received increasing attention from researchers. Chiao and Hamada (2001) considered experiments with correlated multiple responses whose means, variances, and correlations depend on experimental factors. Analysis of these experiments consists of modeling the distributional parameters in terms of the experimental factors and finding factor settings which maximize the probability of being in a specification region, i.e., all responses simultaneously meeting their respective specifications. It is assumed that the multiresponse set has a multivariate normal distribution and also that each response variable is desired to be within a predefined specification region. Kazemzadeh et al. (2008) applied a multiobjective goal programming model to provide a general framework for multiresponse optimization problems. Shah et al. (2004) used the seemingly unrelated regressions (SUR) method for estimating the regression parameters where there are correlated dependent variables. The method can be useful in multiresponse surface problems with correlated responses and leads to a more precise estimate of the optimum variable setting. PCA is a well-grounded statistical multivariate technique for dimension reduction and for constructing independent components from a set of correlated variables. Tong et al. (2005) used PCA to convert correlated response variables into ordinary response surfaces and also applied a multi-criteria decision-making method called TOPSIS to aggregate several quality characteristics. Antony (2000) used PCA with Taguchi's method. In this method, it is assumed that only those components whose eigenvalues are greater than one can be selected to form the final response variables. Thus, the method cannot be applied if the problem has more than one component with such a characteristic. Tong et al. (2005) determined the optimization direction of each component based on the corresponding variation mode charts. Furthermore, Wang (2007) used TOPSIS to find an overall performance index as a criterion for optimizing the multiple quality characteristics. In order to analyze covariates in the MRO problem, some research studies have recently been conducted, e.g., by Hejazi et al.
(2011). According to the literature, many works have used principal component analysis (PCA) to solve correlated multiresponse problems. PCA converts several correlated columns into independent components by linear transformations. These components are then substituted for the multiple original responses. Another approach to this problem is based on predicting the correlation as an individual response variable using response surface methodology (RSM). Each of these approaches has specific benefits and limitations. It seems a sensible claim that PCA cannot provide proper directions for the optimization of components. Moreover, if the number of selected components is less than the number of original responses, some information is lost. Considering correlation coefficients as separate response variables requires a multi-replicated experimental design. Additionally, the accuracy of the estimated correlation is strongly dependent on the number of replications; however, more experimental runs are more costly and time-consuming. Furthermore, even when there are enough experimental runs, the statistical error in the response regression is unavoidable. The last approach to solving the multiresponse optimization problem is the multivariate regression method, which is very useful when the response variables are correlated.

The proposed method aims to consider all location effects and the correlation among the responses. In addition, probabilistic covariates are included in the multiresponse model to reduce the error terms and the uncovered variance.

Material and methods

When a problem involves several equations with common variables, it is recommended to estimate the parameters through a system of equations simultaneously. Various methods such as Ordinary Least Squares (OLS), Cross-Equation Weighting, SUR, Two-Stage Least Squares (2SLS), Weighted Two-Stage Least Squares (WTSLS), Three-Stage Least Squares (3SLS), Full Information Maximum Likelihood (FIML), and the Generalized Method of Moments (GMM) have been proposed to solve such problems. Among them, the SUR and FIML methods are used in this paper to estimate the response surfaces simultaneously.

The SUR method, also known as multivariate regression or Zellner's method, estimates the parameters of the system, accounting for heteroscedasticity and contemporaneous correlation in the errors across equations. Full Information Maximum Likelihood (FIML) estimates the likelihood function under the assumption that the contemporaneous errors have a joint normal distribution. These methods are compared with respect to their main characteristics in Table 2.

In this study, two main approaches are included in the proposed methodology to analyze correlation among the inputs as well as the outputs. The covariates are initially transformed by PCA to remove their correlation; after that, the response surfaces between the correlated response variables and the inputs (the PCs and the control factors) are fitted through a simultaneous equations system. The consecutive steps of the proposed approach are as follows:

Step 1: Identify input and output variables. In this step, all potentially effective variables (namely responses, factors, covariates and other nuisances) should be identified.

Step 2: Select a proper design and run the experiments. A proper design is selected for conducting the experiments with regard to the number of variables and their levels.
Step 3: Perform PCA on the correlated covariates to obtain independent components (see Appendix A for more details about PCA).

Step 4: Develop a system of equations. a) Perform an initial RSM to get an insight into the more effective factors for each response. b) Define an equation for the relations between each response and the other variables. Next, enter each response variable and its related factors as an equation into the system. In addition, let each response be considered as a predictor variable for the other ones.

Step 5: Estimate the parameters of the system. If the error terms are normally distributed, use FIML; otherwise, perform the ISUR method to estimate the coefficients of the effects.

Step 6: Construct a multi-objective optimization model including the following objective functions: the response surfaces related to the quality characteristics, and the probability functions of the PCs derived by using the PCA transformation equations together with the probability function of the original covariates.

Step 7: Apply the Global Criterion (GC) method to solve the multi-objective optimization model.

In Section 4 these steps are discussed in detail.

Model representation

A general multiresponse problem can be expressed as:

optimize $\left[\hat{R}_1(x),\, \ldots,\, \hat{R}_m(x),\, f_1(pc_1),\, \ldots,\, f_q(pc_q)\right]$
subject to: $x$ within the experimental region, $lcl \le c \le ucl$, (1)

where $\hat{R}_i(x)$ represents the response surface for the ith quality characteristic; $f_j(pc_j)$ is the probability function of the jth PC; $x$ is the vector of control factors; and $c$ is the covariate vector calculated by inverting the PCA transformation. Furthermore, it is assumed that the process is statistically under control and that the control range for the covariate vector is $[lcl, ucl]$.

Optimization method (Global Criterion)

This method allows one to transform a multiobjective optimization problem into a single-objective problem. The function traditionally used in this method is a distance. The single-objective problem can be written as:

$\min_x \left[\sum_i w_i \left(\frac{T_i - \hat{R}_i(x)}{d_i}\right)^p\right]^{1/p}$, (2)

where $T_i$ is the optimum value of the ith objective function when only that objective is considered; $w_i$ is a value representing the importance of each objective; and $d_i$ is the range of the ith response within the observed experimental runs (DONOSO; FABREGAT, 2007). In this study, the GC method is applied to convert the problem into a single-objective form.

Results and discussion

This section demonstrates the computational steps of the proposed approach. For this purpose, a numerical example from the literature is considered with some modifications (MONTGOMERY, 2005).

Step 1: A chemical experiment with three controllable variables and two covariates is designed to be analyzed by the proposed method. The outputs are the conversion (Y1) and activity (Y2) levels. Humidity (c1) and environment temperature (c2) are considered as probabilistic covariates.

Step 2: A Central Composite Design (CCD) is selected and the experiments are conducted accordingly. Table 3 shows the results of the experiments gathered by the CCD.
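As an illustration of the covariate decorrelation that the example carries out next in Step 3, the sketch below applies PCA to two correlated covariates. The synthetic data, variable names and use of scikit-learn/NumPy are assumptions made only for illustration; the paper performs these calculations in Minitab.

```python
# Minimal sketch: decorrelating two correlated covariates (humidity and
# temperature stand-ins) with PCA. All numbers here are synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
humidity = rng.normal(40.0, 5.0, size=20)
temperature = 0.6 * humidity + rng.normal(5.0, 1.0, size=20)  # correlated with humidity
C = np.column_stack([humidity, temperature])                  # covariate matrix

pca = PCA(n_components=2)
pcs = pca.fit_transform(C)      # independent components pc1, pc2
A = pca.components_             # transformation matrix (rows are the PCs)

print("covariate correlation:", round(float(np.corrcoef(C.T)[0, 1]), 3))
print("PC correlation:       ", round(float(np.corrcoef(pcs.T)[0, 1]), 3))

# Inverting the transformation recovers the original covariates, which is what
# the optimization model later uses to keep c within its control limits.
C_recovered = pcs @ A + pca.mean_
```

The inverse transformation at the end corresponds to the constraint in the optimization model that recovers the original covariates from the PCs and keeps them within the control limits [lcl, ucl].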
Step 3: PCA is performed on the humidity and temperature covariates. According to the observations, they have the following probability distribution. Since there is a significant linear relationship between the two covariates, it is reasonable to consider a bivariate distribution for their treatments. It may be observed that these two covariates follow a normal distribution with the following parameters: (3)

Considering the above distributions as the marginal probability functions of c1 and c2, the bivariate normal probability distribution for the covariates can be estimated as follows: (4)

PCA gives the following equations to transform the set of covariates into a set of independent ones (the required calculations were performed in the Minitab statistical package): (5)

Step 4: Understanding the strong effects helps us fit better response surfaces. Therefore, Figure 1 shows the effects graphically, and separate RSMs were initially conducted on each response to get an initial idea of which predictive terms should be included in the estimation. The results showed that the following terms should be considered to construct the system of equations. In this case, the problem is analyzed by Iterative Seemingly Unrelated Regression (ISUR) and FIML. The response surfaces regressed by these methods are given in Table 4 (the EViews statistical package was used to estimate the parameters of the system). (6)

Table 4. Estimated equations in the system using the FIML and ISUR methods.

The last constraints calculate the original values of the covariates by inverting the transformation matrix A and ensure that the covariates are within the prespecified statistical control limits. The following calculations are required to obtain the probability function of the PCs.

Theorem 1: If $C$ is a vector of $p$ random variables jointly distributed as $N_p(\mu_c, \Sigma_c)$, and $A$ is a $q \times p$ matrix, then the distribution of $PC = AC$ remains multivariate normal, $PC \sim N_q(A\mu_c,\, A\Sigma_c A^{T})$ (proofs are available in Rencher and Schaalje (2008)).

Model (9) is a nonlinear program due to its first two objective functions. It can be simplified to a quadratic programming model by noting that the mode of a normal distribution occurs at its mean value; therefore, the maximum probability corresponds to the minimum distance from the mean value. With this property of the normal distribution, the final multiobjective quadratic program can be written as:

Subject to: the same constraints as before. (11)

Table 5 gives a summary of the optimal solutions obtained by solving the above model for each objective function separately. According to Table 6, the final multi-objective mathematical model using the Global Criterion can be constructed by replacing the objective functions of the above multi-objective program as in Equation (6). (12)

In this example, we consider the same importance degree for all objective functions. Table 6 shows the optimal solution and the related objective values for this example.
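To make the Global Criterion step concrete, the following sketch assembles a toy version of the final optimization program and solves it with SciPy. The response-surface coefficients, targets, ranges, weights and bounds are all placeholders, not the values estimated in Tables 4-6, and the quadratic treatment of the PC objectives follows the mean-distance simplification described above.

```python
# Minimal sketch of the Global Criterion optimization over control factors
# (x1, x2, x3) and principal components (pc1, pc2). All coefficients, targets,
# ranges and bounds are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize

PC_MEAN = np.array([15.3, -0.4])   # modes of the PC distributions

def r1(v):                          # illustrative response surface for Y1
    x1, x2, x3, pc1, pc2 = v
    return 80 + 2.0 * x1 + 1.5 * x2 - 0.8 * x1 * x2 + 0.3 * pc1

def r2(v):                          # illustrative response surface for Y2
    x1, x2, x3, pc1, pc2 = v
    return 60 + 1.2 * x2 + 0.9 * x3 - 0.5 * x3 ** 2 + 0.2 * pc2

def objectives(v):
    pc = np.asarray(v[3:])
    # Maximizing the PC normal densities == minimizing the squared distance to
    # the mean (both PC terms combined into one objective here for brevity).
    return np.array([r1(v), r2(v), -np.sum((pc - PC_MEAN) ** 2)])

T = np.array([95.0, 70.0, 0.0])      # single-objective optima (targets)
d = np.array([30.0, 25.0, 10.0])     # ranges of each objective over the runs
w = np.array([1 / 3, 1 / 3, 1 / 3])  # equal importance, as in the example

def global_criterion(v):
    return float(np.sum(w * ((T - objectives(v)) / d) ** 2))

bounds = [(-2, 2)] * 3 + [(5, 25), (-1.5, 0.5)]  # coded factors and PC control range
res = minimize(global_criterion, x0=[0, 0, 0, 15.3, -0.4], bounds=bounds)
print(res.x, r1(res.x), r2(res.x))
```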
The results support the claim that a method which applies PCA to the outputs cannot correctly find the optimization direction. The application of PCA to resolve collinearity among the covariates, however, leads to better and more accurate estimations. It is also observed that the most probable values of the covariates lead to more reliable results. The PCA method reaches the target of the first objective due to the large coefficient of the first response in the first PC. It seems that PCA is more useful for correlated predictors than for correlated multiresponse problems. Most existing MRO works have used PCA to obtain uncorrelated responses, but they usually disregard the proper direction of the location effects. Moreover, the proposed methodology has the following main features: the effects of covariates with known distribution functions can be identified; PCA is used to solve collinearity issues when there are meaningful dependencies among the covariates; several objective functions and performance indices of a quality engineering problem can be optimized simultaneously by using the GC method; and the desired direction for the optimization of responses does not change after modeling and optimization.

Conclusion

This study proposes a new hybrid approach to multiresponse optimization in which PCA is applied to handle collinearity among the covariates and multivariate system regression is used to predict the correlated responses. The study models the multiresponse-multicovariate problem as a simultaneous system of equations and uses the estimated equations to construct an optimization program. For further studies, a mixed set of categorical and numerical responses is suggested. In this work, only the variances of the observed values were considered; therefore, the variances of the predicted responses can be another topic for future research.

$A^{T}$ is the transpose of matrix $A$. According to Theorem 1, the distribution function of the PCs is given below. As shown above, the new components have zero covariance, so their probability distributions can be expressed as two individual univariate normal variables: pc1 ~ N(15.3, 6.682) and pc2 ~ N(-0.4, 0.029). The model represented by Equation set (6) can then be explicitly formed and analyzed by the methodology.

Figure 1. Matrix plot for the experimental data.
Table 1. Comparative study of the major works on MRO with correlated data.
Table 2. Characteristics of the major methods of system estimation.
Table 3. Results of the designed experiments for the numerical example.
Table 5. Trade-off matrix and required parameters of the GC method.
Table 6. Optimal results of the numerical example.
3,619
2014-02-26T00:00:00.000
[ "Mathematics" ]
ALICE summary of light flavour results at intermediate and high pT

The ALICE experiment has unique capabilities for particle identification at mid-rapidity over a wide range of transverse momenta (pT), making it an ideal tool for comprehensive measurements of hadrons such as charged pions, kaons, and protons as well as Λ, K0s and φ. The transverse momentum distributions and nuclear modification factors, RpPb and RPbPb, of these hadrons measured in p-Pb and Pb-Pb collisions are presented. Baryon-to-meson ratios exhibit a multiplicity-dependent enhancement at intermediate transverse momenta for both p-Pb and Pb-Pb collisions, while no significant dynamics is observed in the ratios at larger transverse momenta. Finally, measurements of identified particle ratios in association with high-pT particles as well as within reconstructed jets are presented.

1. Introduction

During LHC Run 1 the ALICE detector recorded pp, p-Pb, and Pb-Pb collisions at different center-of-mass energies. Heavy-ion collisions at ultra-relativistic energies are expected to produce QCD matter in which the quarks and gluons are in a deconfined state. Measurements of the production of hadrons in Pb-Pb collisions at intermediate and high pT, relative to pp collisions, provide information about the dynamics of this matter. In the context of light-flavour production, the focus for the Pb-Pb results is on parton energy loss, expected to lead to a modification of energetic jets (jet quenching), and possibly modified fragmentation due to the hot and dense QCD medium. The excellent tracking and particle identification capabilities of the ALICE experiment, in particular its large time projection chamber, make it possible to investigate the spectra of baryons and mesons. The results are presented in terms of particle ratios and nuclear modification factors, RAA and RpA.

2. RAA and RpA for charged hadrons

The nuclear modification factor is defined as the ratio of the particle yield in Pb-Pb to that in pp collisions scaled by the number of binary nucleon-nucleon collisions,

$R_{AA}(p_T) = \frac{d^2 N_{AA}/d\eta\, dp_T}{\langle T_{AA}\rangle\; d^2\sigma_{pp}/d\eta\, dp_T}$,

where $d^2 N_{AA}/d\eta\, dp_T$ is the differential particle yield in Pb-Pb collisions, $d^2\sigma_{pp}/d\eta\, dp_T$ is the invariant cross section for particle production in inelastic pp collisions, and $\langle T_{AA}\rangle$ is the average nuclear overlap function [1]. In the absence of nuclear modifications, RAA is unity for hard processes, which are expected to exhibit binary collision scaling. The nuclear modification factor presented in Fig. 1 shows that the shape of the invariant yield for peripheral Pb-Pb collisions at √sNN = 2.76 TeV is similar to that observed in pp collisions, reflected in the flatness of the RPbPb, while a strong suppression of charged hadron production at high pT is observed for central collisions. To establish whether the initial state of the colliding nuclei plays a role in the observed suppression, the nuclear modification factor in p-Pb for charged particles is also shown in the same figure. RpPb is consistent with unity for pT > 2 GeV/c, and hence the suppression in Pb-Pb collisions is not due to initial-state effects, but to final-state effects in the nuclear matter, such as jet quenching in the hot QCD medium.

3. RAA and RpA for identified hadrons

When constructing RAA for identified light-flavour hadrons, we see in Fig. 2 that, within systematic and statistical uncertainties, they are equally suppressed at pT > 10 GeV/c. The large suppression is a sign of considerable energy loss, and the RpA seen in Fig. 3 establishes that this energy loss is predominantly due to the medium and not caused by initial-state effects.
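The R_AA construction can be illustrated with a few lines of code; the binned yields, pp cross sections and ⟨T_AA⟩ value below are placeholders chosen only to show the arithmetic, not ALICE measurements.

```python
# Minimal sketch of building R_AA from binned spectra. All numbers are
# illustrative placeholders, not measured ALICE values.
import numpy as np

pt_bin_edges = np.array([1.0, 2.0, 4.0, 8.0, 16.0])       # GeV/c
d2N_AA = np.array([5.2e-1, 6.3e-2, 3.1e-3, 9.0e-5])        # Pb-Pb yield per event
d2sigma_pp = np.array([4.0e0, 5.5e-1, 3.0e-2, 1.1e-3])     # pp cross section (mb)
T_AA = 0.39                                                 # nuclear overlap (1/mb)

R_AA = d2N_AA / (T_AA * d2sigma_pp)
print(np.round(R_AA, 2))   # values well below 1 signal suppression
```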
For the intermediate pT range, the protons are less suppressed than the kaons and pions, and a mass ordering is present in the suppression pattern. While the proton and φ modification factors exhibit rather distinct features, the φ/p ratio in central Pb-Pb collisions is observed to be approximately constant as a function of pT, as discussed in Sec. 4, indicating that the differences in the RAA can be attributed to different pp spectral shapes. Looking at identified particles at intermediate pT in p-Pb collisions instead, we see in Fig. 3 an enhancement of Ξ and p, while K and π are consistent with TAA-scaled pp values. Furthermore, there is a mass ordering among π, K, p, Ξ, but the φ does not fit into this pattern.

4. Λ/K0s, p/π and K/π ratios in Pb-Pb collisions

In Pb-Pb collisions, both Λ/K0s and p/π (Fig. 4 [2], [3]) in central and peripheral collisions are consistent with pp for pT > 8 GeV/c, indicating that the processes are dominated by vacuum-like fragmentation. Looking at the intermediate pT range for Λ/K0s, an enhancement is visible towards more central collisions (see Fig. 4), and a shift of the maximum position towards higher pT is observed: in the most peripheral collisions (60-80% centrality) there is a maximum of about 0.55 at pT ~ 2 GeV/c, while the maximum value of the ratio for the most central collisions (0-5% centrality) is about 1.6 at pT ~ 3.2 GeV/c. This shift is consistent with an increasing radial flow towards more central collisions. The magnitude of these maxima increases by almost a factor of three between the most peripheral and the most central Pb-Pb collisions. A hydrodynamical model such as VISH2+1 [4] is able to describe the rise at low pT. At higher pT, models with modified fragmentation (EPOS [5], [6]) and coalescence of quarks (Recombination [7]) describe the shape qualitatively well, but overestimate the enhancement [2].

Figure 4 also shows the p/π and K/π ratios as a function of pT, up to pT = 20 GeV/c, in central (0-5%) Pb-Pb collisions at √sNN = 2.76 TeV compared to pp collisions at √s = 7 TeV and to models [3]. Both present an enhancement at intermediate pT, with the peak at pT = 3 GeV/c. However, the baryon-to-meson ratio p/π presents a much more pronounced increase, reaching a value of about 0.9 at pT = 3 GeV/c, compared to the two-meson ratio K/π. As for the Λ/K0s case, the ratios are in good agreement with hydrodynamical calculations (Krakow [4]) for pT < 2 GeV/c, indicating that the rise of the peak can be described by the mass ordering induced by radial flow. At intermediate pT, around the maxima and up to pT ~ 8 GeV/c, the data are qualitatively described by the recombination model by Fries et al. [7] and the EPOS model [5], [6], but these models also overestimate the maximum values.

p/φ in Pb-Pb collisions

To further investigate the main driving parameter in the spectral shape, we study a baryon-to-meson ratio in which the baryon and meson are of similar mass, namely the p/φ ratio. In Fig. 5 [8] the p/φ ratio is shown as a function of pT, and it is observed that in central Pb-Pb collisions there is a very small difference in their pT distributions, i.e. no baryon-meson difference is present. This indicates that the hadron mass determines the spectral shape.
Particle ratios in p-Pb collisions

Interestingly, the Λ/K0s and p/π ratios in p-Pb collisions show the same qualitative behavior as in Pb-Pb collisions: a multiplicity-dependent baryon-to-meson enhancement at intermediate pT ~ 3 GeV/c is seen in Fig. 6 [9] for two different multiplicity event classes. The results show that p-Pb presents features that are similar to the Pb-Pb phenomenology, even though the magnitude of the enhancement in p-Pb is significantly different from the one observed in Pb-Pb. The maximum of the p/π ratio reaches 0.8 in central Pb-Pb collisions, but only 0.4 in the highest-multiplicity p-Pb events, and the Λ/K0s maximum in central Pb-Pb is 1.5, while it is 0.8 in the corresponding p-Pb collisions. The highest-multiplicity bin in p-Pb collisions exhibits p/π and Λ/K0s ratios whose maxima are close to the corresponding ratios in the 60-70% centrality bin in Pb-Pb collisions, but which differ somewhat in shape at lower pT [9].

7. The origin of the enhancement

One can investigate whether the origin of this enhancement is due to parton fragmentation (hard) or collective effects (soft) by a two-particle correlation study, in which the particles produced in the underlying event, the bulk, are separated from those associated with a high-pT trigger particle, representing a jet-like environment, or the peak region. The peak is defined as a region around (Δη, Δφ) = (0, 0), and the bulk region around (Δη, Δφ) = (±1, 0), where, due to long-range (in rapidity) azimuthal correlations, one expects the flow structure of the underlying event to be the same as under the peak. To study the jet contribution, the bulk is subtracted from the peak region (in Fig. 7: "Peak-Bulk"). In Fig. 7 (top) [10], the p/π ratios in central Pb-Pb events are presented for the bulk and for the peak-minus-bulk event selections, and it is seen that the enhancement is a bulk effect and is not present in jet events. In the p-Pb study, charged-particle jets are reconstructed on an event-by-event basis using an anti-kT algorithm with resolution parameter R = 0.2, 0.3, or 0.4 and requiring one charged track with pT > 10 GeV/c. The Λ and K0s yields are measured within the jet cone and corrected for the underlying event before the ratio is taken. When the ratio is compared to the inclusive ratio, the same conclusion as for the p/π ratio can be drawn: the baryon-to-meson enhancement originates from the bulk and is not present in the jet structure.

Figure 7. Top: p/π ratio as a function of associated particle pT, in bulk and peak-bulk, for 0-10% central Pb-Pb collisions at √sNN = 2.76 TeV, with a leading (trigger) particle pT between 5-10 GeV/c [11]. Bottom: Λ/K0s ratio in jets with different radii reconstructed with the anti-kT method, compared to PYTHIA8 and the Pb-Pb inclusive ratio (full black circles), for 0-10% central p-Pb collisions at √sNN = 5.02 TeV.

Conclusions

At high pT we observe a suppression of identified particle production due to parton energy loss. The same suppression is seen for all light-quark systems created in Pb-Pb collisions, which suggests that the chemical composition of leading particles from jets in the medium is similar to that of jets produced in vacuum. No suppression is seen in p-Pb collisions, indicating that the suppression observed in Pb-Pb is a final-state hot-matter effect. At intermediate pT, the particle ratios show a baryon-to-meson enhancement which in Pb-Pb is understood in the coalescence and/or hydrodynamic flow picture.
In p-Pb collisions we see similar, though less pronounced, features as in Pb-Pb. By separating the underlying event from the jet-like structures, we note that the baryon-to-meson enhancement appears to arise in the underlying event in both Pb-Pb and p-Pb collisions, while the jet-like contributions appear to be unmodified.
2,601.2
2015-01-01T00:00:00.000
[ "Physics" ]
Interaction graph-based characterization of quantum benchmarks for improving quantum circuit mapping techniques

To execute quantum circuits on a quantum processor, they must be modified to meet the physical constraints of the quantum device. This process, called quantum circuit mapping, results in a gate/circuit depth overhead that depends on both the circuit properties and the hardware constraints, with limited qubit connectivity being a crucial restriction. In this paper, we propose to extend the characterization of quantum circuits by including qubit interaction graph properties, using graph theory-based metrics in addition to previously used circuit-describing parameters. This approach allows for an in-depth analysis and clustering of quantum circuits and a comparison of performance when run on different quantum processors, aiding in the development of better mapping techniques. Our study reveals a correlation between interaction graph-based parameters and mapping performance metrics for various existing configurations of quantum devices. We also provide a comprehensive collection of quantum circuits and algorithms for benchmarking future compilation techniques and quantum devices.

Introduction

Quantum technology has experienced rapid development in the past decades and has the potential to solve some classically intractable problems. Its contributions are still at an early stage, as current so-called Noisy Intermediate-Scale Quantum (NISQ) devices can only handle simple, small-sized algorithms, limited as they are by size and noise. They also impose additional hardware constraints, such as low qubit connectivity, a reduced supported gate set, and limitations related to classical-control resources, which make it even more difficult to execute a quantum circuit successfully on these processors.

Quantum algorithms, usually represented as quantum circuits, are hardware-agnostic; that is, when described, they do not consider hardware restrictions. To execute such algorithms (quantum circuits) on a quantum processor, they must be modified to fulfill the processor's limitations through a process called quantum circuit mapping. The quantum circuit mapper, which is part of the compiler, is thus at the core of the full-stack quantum computing system, connecting algorithms with quantum devices [1].
Various techniques have been proposed to deal with the mapping of quantum circuits [2-10], which differ in approach (exact or heuristic, local or global solution), methodology (e.g., SMT solvers [11]), cost functions (optimizing the number of gates or the circuit depth) and performance metrics (e.g., circuit fidelity). These solutions, however, adopt a bottom-up approach, developing mappers specifically for certain quantum processors and technologies. The majority of quantum circuit mapping techniques have mostly focused on hardware properties [4,12] and have only considered a rather limited set of algorithm characteristics, such as the number of qubits, the number of quantum gates, the two-qubit gate percentage, and the qubit interactions (i.e., which pairs of qubits perform a two-qubit gate). In addition, when mapping outcomes are analyzed, the focus is on the values of the obtained metrics, without further evaluating why some circuits show higher or lower overheads. Some works have already pointed out the importance of including more algorithm features in the mapping process [13]. A more complete and in-depth profiling of quantum circuits will help to: i) gain a deeper understanding of why specific algorithms have higher fidelity than others when run on a particular processor using a specific mapping technique; ii) categorize (cluster) quantum circuits based on those parameters and predict the performance of additional circuits with similar properties in terms of mapping-related metrics, without actually running them on a given device; and iii) develop application-driven and hardware-aware mapping techniques (i.e., mapping techniques tailored for a specific set of algorithms in addition to overcoming hardware constraints) [1,14,15]. More broadly, this characterization of quantum circuits will also be crucial for defining a meaningful and complete set of quantum benchmarks to evaluate not only quantum circuit mapping techniques but also full-stack quantum computing systems, as well as for having a set of algorithm-level metrics to measure system performance [16].

One of the most stringent quantum hardware constraints that quantum circuit mapping techniques have to deal with is the limited connectivity of the physical qubits, which restricts the possible interactions between them. Therefore, in this paper, we propose to extend the profiling of quantum circuits/algorithms by not only extracting 'standard' parameters like the number of qubits and gates and the percentage of two-qubit gates, but also by performing a deeper analysis of their qubit interaction graphs (i.e., representations of the two-qubit gates or qubit interactions of the circuit). Taking input from graph theory and machine learning, we characterize quantum circuits based on their interaction graph metrics (e.g., average shortest path, connectivity, clustering coefficient). We then map those quantum circuits onto several quantum processors using a specific quantum circuit mapping technique. In future work, we will also use different quantum circuit mapping configurations, allowing us to evaluate which quantum circuit features impact the circuit mapping performance the most and to identify which combination of mapping technique and quantum hardware works better for a given (set of) algorithm(s). Note that this analysis can in the future help in the codesign of algorithm-driven compilation methods and quantum hardware.
In addition, we present a categorized and, as of now, the most comprehensive set of quantum algorithms (benchmarks) from various sources and platforms and in different quantum programming languages. Most of the currently existing and used quantum algorithms, synthetically generated circuits and application-based circuits are included in this collection and classified based on different criteria. We hope that this set of algorithms/circuits will be used for benchmarking quantum computing systems as well as parts of them, such as compilation techniques.

The main contributions of this work are:

1. We have performed the first characterization and clustering of quantum circuits that also considers qubit interaction graph parameters in addition to the characteristics related to circuit size (number of gates, number of qubits, amount of two-qubit gates). In-depth profiling and clustering of quantum circuits based on their more structural parameters helps to analyze why and when some (families of) quantum algorithms show better performance than the rest when executed on a given quantum processor, as well as which circuit parameters have a higher impact on performance for some hardware-compiler setups. Subsequently, that can also help to predict the mapping performance for additional circuits with similar properties, without actually running them on a given device, and therefore assist in recommending an adequate mapper and hardware configuration to use. Finally, this analysis of circuit structural parameters is crucial for the development of future application-based quantum devices and mappers.

2. We have found that quantum circuits that are similarly structured in terms of their interaction graph parameters have comparable results in terms of circuit fidelity and gate overhead when mapped onto the same quantum device using the same mapping technique. By running these groups of circuits with different hardware configurations, we can make clear suggestions on which group of circuits fits which hardware better.

3. We provide the so-far most comprehensive collection of quantum benchmarks, open source and available in most currently used high- or low-level quantum languages. The goal is to help the quantum community speed up the research process and the development of a full-stack quantum system by having an easily accessible, all-in-one-place set of benchmarks that can be used for analyzing the performance of existing and future quantum processors and compilation methods.

The paper is organized as follows: Sect. 2 presents a short introduction to full-stack quantum computing systems and an overview of the current state-of-the-art quantum circuit mapping techniques as well as benchmark characterization. Sec. 3 introduces our profiling of quantum algorithms and their clustering based on size and structure. The experimental setup, with the details of our benchmark collection, is included in Sec. 4. Sec. 5 showcases the obtained results on how the mapping performance of quantum circuits when run on a specific chip relates to their structural parameters acquired from the analysis of their interaction graphs and their clusters from Sec. 3. Finally, in Sec. 6 and Sec. 7, conclusions and future work are presented.
2 Background and related work

Quantum computers nowadays

Quantum hardware has significantly progressed since its inception, and a wide variety of technologies has been developed for implementing qubits, such as solid-state spins, trapped-ion qubits or superconducting qubits [17]. Hardware characteristics like the number of qubits and gate fidelity are continuously improving. However, current NISQ devices are still immensely resource-constrained and error-prone. They are not able to keep up with the development of promising quantum algorithms that might achieve exponential speed-up, as they lack the size (number of qubits) required for the implementation of fault-tolerant and error-corrected techniques. Therefore, it was inevitable to develop a set of algorithms that can be successfully executed on current processors, coming from different fields like quantum physics, chemistry, or machine learning [18].

Quantum compilers act as intermediaries between algorithms (expressed as quantum circuits) and quantum processors. They not only translate high-level programming language instructions (e.g., the Qiskit library in Python [19]) into low-level ones (a quantum assembly-like language, e.g., OpenQASM [20]), but are also responsible for transforming and optimizing the quantum circuit to best fulfill the quantum hardware requirements. The compiler design and complexity highly depend on the constraints imposed by the hardware and the chosen technology. In nearest-neighbor architectures (e.g., a 2D array of qubits), the primary constraint is the limited connectivity among qubits. As running two-qubit gates requires that the paired qubits are adjacent on the chip, restricted connectivity can become a huge obstacle. The compiler tries to overcome this and other limitations and helps to successfully execute a quantum circuit on a given quantum device through a process called mapping. Note that the mapping of quantum circuits usually results in a gate and latency overhead that in turn decreases the circuit fidelity. Therefore, having efficient mapping techniques is crucial in the NISQ era, not only to successfully execute quantum algorithms but also to extract the most out of constrained NISQ devices.

Computing with NISQ devices

One of the motivations for building quantum computers in the first place is to run algorithms that solve problems that are intractable for existing classical computers due to limitations in speed and memory. Current NISQ devices can only handle simple algorithms, in terms of the number of qubits and gates and circuit depth, as the presence of noise and limited resources (physical qubits) still constrain them: quantum operations have high error rates and qubits decohere over time, resulting in information loss. On top of that, running an algorithm on a NISQ device is not a straightforward process. That is due to hardware constraints that affect the algorithm execution and that can vary between quantum technologies.

One of the restrictions that affects the execution of a quantum algorithm the most is (limited) qubit connectivity. This applies to most technologies, including superconducting qubits and quantum dots, where qubits are arranged in a 2D grid or some other not-fully-connected topology, as shown in the top-right part of Fig.
1, allowing only nearest-neighbor interactions. In order to perform a two-qubit gate in such an architecture, the two interacting qubits in the circuit have to be placed on neighboring physical qubits on the chip, which is not always possible (see Fig. 1: the two two-qubit gates between virtual qubits 1 and 5, and 5 and 6, cannot be directly performed because these qubits do not share a physical connection in the coupling graph). Other constraints that have to be considered are: i) the primitive gate set: the gates of the circuit to be executed do not always match the native gate set (supported gates) of the quantum chip. For instance, to run the quantum circuit shown in Fig. 1 on the Surface-17 chip [12], its CNOT gates would have to be decomposed into X and Y rotations and the CZ gate supported by the device; ii) classical-control constraints: shared electronics help to scale up quantum systems but may limit the parallelization of quantum operations during circuit execution. The process of accommodating these requirements imposed by the quantum hardware in order to efficiently execute a quantum algorithm is called quantum circuit mapping.

The quantum circuit mapping process consists of the following steps (not necessarily in this order): 1) adapting the gate set of the circuit to the gates supported by the device; 2) scheduling the quantum operations (qubit initialization, gates and measurements) of the circuit to leverage its parallelism and therefore shorten the execution time; 3) placing the virtual qubits (of the circuit) onto physical qubits (on the actual chip) so that the previously mentioned nearest-neighbor two-qubit-gate constraint is satisfied as much as possible during algorithm execution; and 4) routing, or exchanging positions of virtual qubits on the chip, such that all qubits that could not initially interact become adjacent and can perform their corresponding two-qubit gates (Fig. 1). This is done by inserting additional quantum gates. How routing is performed and which gates are inserted is technology-dependent, with various existing methods (SWAP gates, shuttling). Therefore, the resulting after-mapping circuit will in most cases have more gates and a longer execution time than the original. Due to the previously mentioned highly erroneous quantum operations and qubit decoherence, the overhead in terms of the number of gates and circuit depth caused by the mapping should be minimal, as it ultimately impacts the algorithm fidelity.
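The gate and depth overhead caused by routing can be illustrated with a small, self-contained example. The sketch below uses the Qiskit transpiler and a linear five-qubit coupling map purely as stand-ins; the paper itself uses the OpenQL compiler and the devices described in Sec. 4.

```python
# Minimal sketch: mapping a toy circuit onto a linearly connected 5-qubit
# device. The coupling map and basis gates are illustrative only.
from qiskit import QuantumCircuit, transpile

circ = QuantumCircuit(5)
circ.h(0)
circ.cx(0, 4)   # qubits 0 and 4 are not neighbours on the device below
circ.cx(1, 3)
circ.cx(2, 4)

coupling = [[0, 1], [1, 2], [2, 3], [3, 4]]   # nearest-neighbour line 0-1-2-3-4

mapped = transpile(
    circ,
    coupling_map=coupling,
    basis_gates=["rz", "sx", "x", "cx"],
    optimization_level=1,
)

# Routing inserts SWAPs, so the mapped circuit has more gates and is deeper.
print("before:", sum(circ.count_ops().values()), "gates, depth", circ.depth())
print("after: ", sum(mapped.count_ops().values()), "gates, depth", mapped.depth())
```

The increase in gate count and depth reported by such a sketch is exactly what the mapping performance metrics introduced later quantify.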
Various approaches have been proposed to solve the circuit mapping problem, each using different methods and strategies. Some solutions are optimal (exact), but work in a brute-force style and are thus only suitable for small circuits [6,11,21]. For larger circuits, and to allow for scalability, heuristic solutions are a better fit [2,12,22,23]. Some methods proposed by related works include the use of SMT solvers [3,11], greedy heuristics [2,6,24,25] and machine learning-based algorithms [10,26,27]. These solutions all focus on the 'routing' part of the mapper. In addition, it is possible to deal with the mapping problem by optimizing its other stages, like scheduling [12,23], gate transformation [27-30] or initial placement [4,31,32]. Different metrics are used to assess the performance of a quantum circuit mapping technique, depending on the cost function: some works have the goal of minimizing the number of gates or the gate overhead (e.g., the number of additional SWAP gates) [6,8,12,29,30,32-34], some prioritize low circuit depth or latency (circuit execution time) [6,8,12,27,30,33], and finally some focus on the success rate of the circuit [31,35] and on maximizing fidelity [3,4,30] by also considering the different error rates of the quantum device. Note that the overall goal in the current NISQ era is to maximize the fidelity and success rate of quantum circuits, which currently mostly depends on the gate and circuit depth overhead. Fig. 2 shows the impact of the number of gates and the gate overhead on the circuit fidelity. However, as shown in Fig. 2(b), not all circuits end up with the same decrease in fidelity for the same or similar gate overhead. Note that the circuit fidelity is close to 0% for any circuit with more than 500 gates (Fig. 2a). In addition, a gate overhead of over 200% after mapping leads, in most cases, to a 100% fidelity decrease (Fig. 2b).
These approaches all have in common that they are designed to adapt quantum circuits to the device-specific properties and constraints while considering only a reduced set of algorithm properties, such as gate and qubit count and two-qubit gate percentage (including qubit interactions). A more in-depth quantum circuit characterization is still missing; it could, for instance, include characteristics of the qubit interaction graph, like the number of times each pair of qubits interacts and the distribution of those interactions among the qubits, and of the quantum instruction dependency graph (i.e., the graph that represents the dependencies between gates in the circuit, used for scheduling). Looking further into interaction graphs is very beneficial for the quantum circuit mapping process since, as stated before, the most stringent constraint of current quantum hardware is its limited qubit connectivity. Some authors have already pointed out the importance of including application properties [1,5,36,37] and of considering the characteristics of the qubit interaction graphs for improving the mapping of quantum circuits [33,38]. Even in classical computing, we notice that different computing resources are needed depending on what we use the computers for and which applications are executed. For instance, a dedicated GPU can be used for highly parallelizable processes. Likewise, thorough profiling can help to identify which algorithm characteristics are required to execute it successfully on a given device, and vice versa. The structural properties of quantum circuits can also help to understand why specific algorithms show better success rates than others when run on a particular processor using a specific mapping technique.

Profiling of quantum circuits based on qubit interaction graphs

This section provides an overview of the qubit interaction graph-based benchmark profiling and clustering process, emphasizing why this can be meaningful for improving future quantum circuit mapping techniques.

On the importance of qubit interaction graphs for quantum circuit mapping

A qubit interaction graph G(V, E) is a graphical representation of the two-qubit gates of a given quantum circuit. It is, in general, a directed connected graph. Fig. 1 shows an example of a quantum circuit (Fig. 1(d)) along with its interaction graph G_i(V_i, E_i) representation (Fig. 1(a)). Directed edges E_i represent two-qubit gates, and nodes V_i are the qubits that participate in them. Since the direction of the edges in most cases does not influence the execution of the gates, it is sufficient to treat the interaction graph as undirected for the mapping problem [39]. If a circuit comprises multiple two-qubit gates between pairs of qubits, the result is a weighted graph (as in Fig. 3), which shows how often each pair of qubits interacts and how those interactions are distributed among the qubits. This additional information can be leveraged to provide more insight into a circuit structure that is otherwise hidden when only considering standard algorithm parameters such as the number of qubits and gates and the two-qubit gate percentage. To illustrate this, Fig.
3 shows the interaction graphs of two quantum algorithms, an instance of QAOA and a randomly generated circuit (on the right), which a priori are similar when characterized only in terms of the three common algorithm parameters. What can be noticed is that their qubit interaction graph structures are quite different: the graph of the random circuit is more complex, with full connectivity, and presents a different distribution of the interactions between qubits, that is, of the weights. This will result in more routing and, therefore, higher overhead, unless the coupling graph of the processor is indeed fully connected (Sec. 5). This shows the importance of the quantum circuit structure when developing mapping techniques and the necessity of characterizing circuits in terms of their qubit interaction graphs. A few works have already pointed out how the interaction graph, along with the quantum instruction dependency graph, can be used as a baseline for designing better mapping techniques [2,12,40,41]. In those works, gate dependency graphs are used as core information for scheduling optimization and look-ahead techniques, whereas interaction graphs are usually only used for the initial placement of qubits in the routing procedure. Considering that the primary constraint affecting the fidelity of circuit execution is the nearest-neighbor connectivity required for performing two-qubit gates, it would be valuable to know in advance how the two-qubit gates are distributed among the qubits, and not only their quantity.

In this paper, we perform profiling of quantum circuits by focusing on interaction-graph properties and their relation to quantum circuit mapping. To that purpose, we took input from graph theory and analyzed qubit interaction graphs based on the metrics described in [42], with a focus on those that are relevant to the mapping problem.

Quantum circuit profiling in our work consists of the following steps:

1. Benchmark collection: collecting benchmarks (quantum circuits) from various sources, translating them to the same quantum language and extracting their interaction graphs (Sec. 4).
2. Parameter selection and extraction: choosing and extracting graph-theory-based parameters from the qubit interaction graph that are relevant to the mapping of quantum circuits.
3. Benchmark clustering: clustering benchmarks based on their size- and interaction graph-related parameters.

After performing these steps, we compiled the quantum circuits using OpenQL [43] and analyzed the relation between their performance and the extracted parameters, as well as the clusters (Sec. 4 and Sec. 5).
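A minimal sketch of this kind of extraction is shown below, assuming circuits are available as Qiskit objects and using NetworkX for the graph metrics; the helper is illustrative rather than the authors' tooling, and the chosen metrics anticipate the reduced set discussed in the next subsection.

```python
# Minimal sketch: building a weighted qubit interaction graph from a circuit
# and extracting a few graph-theory metrics. The crude two-qubit check below
# is a simplification (it only looks at the number of qubits of an operation).
import networkx as nx
import numpy as np
from qiskit import QuantumCircuit


def interaction_graph(circ: QuantumCircuit) -> nx.Graph:
    """Nodes are qubits; edge weights count two-qubit gates on each pair."""
    g = nx.Graph()
    g.add_nodes_from(range(circ.num_qubits))
    for inst in circ.data:
        if inst.operation.num_qubits == 2:
            q0, q1 = (circ.find_bit(q).index for q in inst.qubits)
            w = g.edges[q0, q1]["weight"] if g.has_edge(q0, q1) else 0
            g.add_edge(q0, q1, weight=w + 1)
    return g


def graph_features(g: nx.Graph) -> dict:
    degrees = [d for _, d in g.degree()]
    adjacency = nx.to_numpy_array(g)   # weighted adjacency matrix
    return {
        # assumes the interaction graph is connected
        "avg_shortest_path": nx.average_shortest_path_length(g),
        "max_degree": max(degrees),
        "min_degree": min(degrees),
        "adjacency_std": float(np.std(adjacency)),
    }
```

Features like these, computed per benchmark, are what the clustering described in the following subsections operates on.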
Parameter selection for quantum algorithm profiling

There exists a vast number of metrics used for describing graphs, which can be classified into different groups and classes. However, not all of these metrics are relevant to our goal in terms of qubit interaction graph analysis. After thoroughly investigating all the metrics described in [42], we chose those that are key for the circuit mapping problem. These metrics, when calculated from the qubit interaction graphs, should represent features of quantum circuits that correlate with the mapping performance metrics (e.g., the number of SWAPs). For instance, the node degree distribution is a relevant metric, as it defines the connectivity of the graph (i.e., the density of qubit interactions). The more connected the graph, the higher the node degrees. In the case of an all-to-all connected interaction graph, all degrees would be n − 1 (n being the number of qubits), and such a graph would be more challenging to map onto limited-connectivity device topologies, resulting in the insertion of a higher number of additional SWAP gates. Table 1 shows the selected subset of metrics and how they relate to the quantum circuit mapping process.

Table 1. Selected metrics for characterizing interaction graphs and their relation to quantum circuit mapping:
- Average shortest path (average hopcount): average length of the shortest paths between node pairs, hence the hopcount of the path. The larger the average hopcount between the nodes, the less connected the graph; a simpler interaction graph is easier to map.
- Diameter: maximum hopcount in the graph, or the longest shortest path. For graphs with the same number of nodes, the larger the diameter, the simpler and less connected the graph, and the easier it is to map.
- Persistence: smallest number of links whose removal disconnects the graph. The smaller the persistence, the less connected the graph (also with lower link weights), which makes the quantum circuit easier to map.
- Betweenness / central point of dominance: number of shortest paths between nodes that traverse some node or edge / maximum betweenness of any point in the graph (ranging from 0 for complete graphs to 1 for star-shaped graphs). The betweenness of node or link k is $B_k = \sum_{i \neq j} \sigma_{ij}(k)/\sigma_{ij}$, where $\sigma_{ij}$ is the number of shortest paths between i and j and $\sigma_{ij}(k)$ is the number of those passing through k. Values near 0 or 1 are undesirable from the perspective of quantum circuit mapping: 0 reports a graph that is too connected, while 1 indicates one qubit being involved in all gates, making the circuit hard to parallelize.
- Maximal and minimal degree: maximum and minimum value of the degree, where the degree d of a node n is the number of nodes to which n is connected. The lower the minimal and maximal degree, the smaller the interaction of a qubit, which makes the quantum circuit easier to map. Trade-off: a large variance means some specific pairs of qubits interact much more than others and there is less additional movement involved.

We noticed, however, that a large number of these metrics are correlated, i.e., they scale in the same manner. Therefore, the parameter space was reduced by using a Pearson correlation matrix, as shown in Fig. 4 (-1/1 meaning maximally correlated, 0 meaning not correlated) [44]. For instance, the minimal node degree of a graph strongly relates to the maximal clique and the edge connectivity, so using just one of these three parameters is sufficient. This method allowed us to reduce our previous metric set to: average shortest path (average hopcount), maximal and minimal node degree, and the standard deviation of the adjacency matrix (interaction graph edge-weight distribution). These metrics and the common circuit parameters can be used to cluster quantum circuits. It is expected that quantum algorithms with similar properties should show similar performance when run on specific chips using a given mapping strategy.

Clustering benchmarks: outcomes and evaluation

As mentioned earlier, one of our goals is to find structural similarities among quantum circuits and create 'circuit families' whose elements (quantum circuits) show similar compilation behavior and require similar hardware resources. The two criteria we have used for clustering benchmarks are properties based on circuit size and on the qubit interaction graph. Note that we performed a two-step clustering: circuits were first clustered based on size parameters (number of qubits and gates and percentage of two-qubit gates) and then on qubit interaction graph metrics. The reason behind this was to prevent the former from becoming the most significant criterion of our clustering algorithm. Fig. 5 shows the five clusters (different colors) into which a set of 300 selected benchmarks (Sec. 4) has been divided by using the kmeans algorithm [45]. The x-axes of the plot list three different parameters, with their values shown on the y-axes.

Each of the five size-related clusters can then be further divided into sub-clusters based on the previously explained graph parameters: average shortest path length, maximal and minimal degree, and adjacency matrix standard deviation. In this case, we again selected the kmeans algorithm among several others by evaluating different methods and parameter setups with the silhouette coefficient method [46]. Figure 6 shows an example in which one of the size-parameter-based clusters (cluster 0 from Fig. 5) is divided into sub-clusters based on the interaction graph parameters. It is also straightforward for additional future circuits to be assigned to a specific cluster (size- and interaction graph-based), as each of the clusters and sub-clusters covers a specific range of parameter combinations (e.g., cluster 4 in Fig. 5 covers benchmarks with less than 25% of two-qubit gates, and cluster 3 in Fig. 6 covers the highest minimal degree values (over 6)). Those circuits should then have expected fidelity and gate overhead outcomes similar to the other circuits in their cluster. How exactly the mapping performance metrics correlate with our clusters from Fig. 6, and the possible reasons for that, is described in the next sections.

Fig. 6. Sub-clustering of quantum algorithms of cluster 0 (Fig. 5) based on interaction graph parameters.

Experimental setup

This section describes all the necessary elements for performing our experiments: i) our newly created benchmark collection [47] and the subset used in this paper; ii) the OpenQL compiler with its Qmap mapper [12] and the Surface-97, IBM Rochester and Aspen-16 configuration files; and iii) the chosen set of metrics for evaluating the performance of the quantum circuit mapping technique.
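The two-step clustering described in the previous section can be sketched compactly as follows, assuming a pandas DataFrame with one row per benchmark and feature columns named after the parameters above; the column names, candidate k values and use of scikit-learn are assumptions for illustration, not the authors' exact setup.

```python
# Minimal sketch of the two-step clustering: first on size parameters, then
# sub-clustering each size cluster on interaction-graph metrics. The feature
# names and k range are illustrative.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

SIZE_COLS = ["n_qubits", "n_gates", "two_qubit_gate_pct"]
GRAPH_COLS = ["avg_shortest_path", "max_degree", "min_degree", "adjacency_std"]


def cluster(frame: pd.DataFrame, cols, k_range=range(2, 8)):
    """Scale the selected features, pick k by silhouette score, return labels."""
    x = StandardScaler().fit_transform(frame[cols])
    best_k = max(
        k_range,
        key=lambda k: silhouette_score(
            x, KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(x)
        ),
    )
    return KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(x)


# Usage on a benchmark table `df` (one row per circuit):
# df["size_cluster"] = cluster(df, SIZE_COLS)
# for c in sorted(df["size_cluster"].unique()):
#     idx = df["size_cluster"] == c
#     df.loc[idx, "graph_subcluster"] = cluster(df[idx], GRAPH_COLS)
```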
Quantum benchmarks collection and classification

The fast development of quantum computing systems dictates the necessity for an all-including and standardized benchmark suite that can serve to test quantum devices as well as compilation techniques and, in general, any part(s) of the full stack. To address this issue, we collected various types of quantum circuits used as benchmarks from a large number of sources [19,20,48-60], written in and translated to different available high- and low-level languages. An overview of our open-source benchmark suite, called QBench [47], is shown in Fig. 7. Benchmarks are first divided into two high-level groups: real vs. synthetic quantum circuits. The first group is further split into two categories depending on whether the circuits are based on quantum algorithms or are simple reversible arithmetic circuits. In the second group, we can find three different subgroups based on how the circuits are generated. According to [61], currently used benchmarks based on real algorithms (QFT, search algorithms, application-based algorithms) are the ones of highest importance when measuring the performance of all future quantum systems, as they are scalable, meaningful and can show the advantage of quantum systems compared to their classical counterparts [16]. For the current NISQ era, however, there is a need for benchmark libraries like RevLib [60] that are within the domain of reversible and quantum circuit design. Synthetic benchmarks represent the group of randomly generated quantum circuits, which provide a larger variety in terms of their parameters (e.g., number of qubits, gates, two-qubit gate ratio, circuit depth) and are mainly used to test the performance of quantum devices and to explore their computational power to the fullest. For this paper, we mainly focused on: i) randomly generated quantum circuits, which are created by uniformly randomly choosing single- and two-qubit gates from a predefined set and then applying them to arbitrarily chosen qubits or qubit pairs in the circuit [52]; ii) QUEKO circuits [49], which are designed to be optimal for specific devices (e.g., with optimal depth); and iii) quantum volume square circuits [62], which are used in general for benchmarking quantum system architectures. A summary of all the real-algorithm-based and synthetic circuits that are part of our benchmark set can be found in [47].

Benchmarks in our set are also classified based on their size (large-, middle- and small-scale, and parameterized ones) and on the higher- or lower-level language they are written in [47]. Note that a parameterized (scalable) version of the circuits allows the creation of new circuits of a desired size, which will be very meaningful for future quantum systems [16]. Furthermore, different translators from one quantum language to another, interaction graphs and interaction graph-based profiling are also part of this benchmark suite.

For our experiments, we selected a subset of 300 benchmarks from QBench covering different types (previously described in this section) and qubit number ranges (2-1281 qubits for clustering, 3-54 qubits for mapping experiments).
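The 'randomly generated circuit' recipe in item i) above can be sketched in a few lines; the gate set, the two-qubit-gate fraction and the use of Qiskit are illustrative assumptions rather than the exact generator of [52].

```python
# Minimal sketch of a random benchmark generator: single- and two-qubit gates
# are drawn uniformly from a predefined set and applied to arbitrarily chosen
# qubits or qubit pairs. Gate set and defaults are placeholders.
import math
import random
from qiskit import QuantumCircuit


def random_benchmark(n_qubits: int, n_gates: int,
                     two_qubit_fraction: float = 0.35, seed: int = 0) -> QuantumCircuit:
    rng = random.Random(seed)
    circ = QuantumCircuit(n_qubits)
    for _ in range(n_gates):
        if n_qubits >= 2 and rng.random() < two_qubit_fraction:
            q0, q1 = rng.sample(range(n_qubits), 2)
            circ.cx(q0, q1)
        else:
            gate = rng.choice(["h", "x", "rz"])
            q = rng.randrange(n_qubits)
            if gate == "rz":
                circ.rz(rng.uniform(0.0, 2.0 * math.pi), q)
            else:
                getattr(circ, gate)(q)
    return circ


# Example: a 10-qubit, 200-gate synthetic benchmark with ~35% two-qubit gates.
circuit = random_benchmark(10, 200)
```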
Note that this benchmark set is to become open source, not only for other researchers to use it in the future development of quantum systems, but also for others to participate in its future extensions. There will always be new benchmarks that can be added, or quantum languages to translate the current benchmarks to, as we are in an era of continuous development of new quantum algorithms, compilers, simulators, and programming languages.

Quantum compiler and targeted quantum devices

To analyze how the previously shown clusters of circuits (Sec. 3) relate to their after-mapping outcomes, we compiled the 300 selected quantum circuits using as target quantum processor an extended 97-qubit version of the Surface-17 chip (as in Fig. 8(a)). Surface-17 is a quantum processor with a surface code architecture [12], designed to be easily scalable. The device characteristics and all its constraints are included in a configuration file, which is then used as input for the OpenQL compiler [43]. The configuration file of our chosen back-end includes information like error rates, the primitive gate set, gate-decomposition rules and the processor qubit topology/connectivity. In addition, and in order to compare the performance of the mapper for different groups of circuits, we performed the same experiments for two more quantum processors: the IBM Rochester and the Rigetti 16q-Aspen chips, shown in Figs. 8(b) and 8(c), respectively. We selected these device configurations because they are currently commonly used in other research on quantum circuit mapping and provide realistic and different connectivity patterns in their coupling graphs. Note that in our experiments we do not execute the quantum circuits on actual devices; instead, they are just mapped onto the different quantum processors, that is, their hardware constraints are considered in the compiling process. At the core of the OpenQL compiler is its Qmap mapper, which has many options and strategies, allowing the creation of a custom-made compilation technique. The Qmap quantum circuit mapper considers several types of hardware constraints: limited connectivity, the primitive gate set and restrictions derived from the classical control electronics. It supports several options for circuit optimization, routing, initial placement as well as scheduling. In addition, it outputs different circuit mapping performance metrics, such as the number of additional gates and the circuit latency. The routing strategy we opted for was MinExtend [12], which, among other features, includes looking back at previously mapped gates and strives to minimally extend the latency of the circuit. It also includes different but common gate transformation and optimization strategies, such as gate cancellation or commutation.

Metrics

The most commonly used metrics for quantum circuit mapper evaluations are the number of added SWAPs, the circuit depth and the fidelity/reliability. In our case, we have used the additional-gate and extended-depth information retrieved from the compiler to calculate the following metrics:

1. Gate overhead, calculated as $(G_{after} - G_{before})/G_{before} \times 100\%$, where $G_{before}$ and $G_{after}$ represent the number of gates before and after compilation.
2. Latency overhead, defined as $(L_{after} - L_{before})/L_{before} \times 100\%$, where $L_{before}$ and $L_{after}$ represent the circuit latency before and after compilation. The latency is calculated as the number of cycles of the circuit, which also considers variations in gate duration, making it different from the circuit depth, in which all gates are considered to take one time step.
3.
In the following section, we discuss the relation of the structural parameters of the circuits to the above metrics after mapping the circuits onto the Surface-97, IBM Rochester, and Rigetti Aspen-16 devices. Mapping the circuits to Surface-97 chip architecture In this section, we evaluate and compare the mapping outcomes of our selected circuits and analyze how the circuit parameters impact the results. Additionally, we compare the performance of different clusters of circuits when using the same mapping technique and processor design (Surface-97). As previously shown in Sec. 2 (Fig. 2), the gate overhead and circuit fidelity decrease are, on average, higher for our type of synthetic (randomly generated) circuits than for those based on real algorithms, even when they are in the same size range. (The details of how much the fidelity dropped for each benchmark, and how much it differs between the two groups, are shown in Fig. 18 in the Appendices.) Furthermore, as shown in Fig. 9, these two groups of circuits (real and synthetic) are further divided into a total of four differently structured groups: randomly generated circuits, QUEKO benchmarks, quantum-algorithm-based circuits, and reversible arithmetic circuits. Note in Fig. 9 the differences between these groups in terms of the three defined mapping performance metrics. Reversible arithmetic circuits showed, on average, the lowest gate overhead (∼120%) and therefore the smallest decrease in fidelity. Randomly generated circuits have, on average, the best latency overhead (∼88%). To give an example, QUEKO circuits show an average gate overhead of ∼348%, a latency overhead of ∼153%, and a fidelity decrease of nearly 100%. All of this clearly shows the importance of including the structure of the quantum circuit in the mapping process, and it leads us to use that information to our advantage when choosing an appropriate pair of device and mapping technique. Subsequent to this, we show how the size-related parameters (number of qubits, number of gates, and two-qubit gate percentage) relate to gate overhead and fidelity decrease, respectively, as shown in Figure 10. Each point in the graphs represents a benchmark mapped to the Surface-97 processor and, just like in Fig. 9, the different groups of benchmarks are shown using different symbols. In this case, we only considered circuits with up to 500 gates, as all those above that threshold had negligible fidelity even before mapping. Note that these three parameters are correlated with the mapping results of the circuits on the chip: the closer the points in the graphs are to 0 on all axes simultaneously, the lower the overhead and fidelity decrease. Another point that can be made from these figures is that the synthetic circuits (QUEKO and random circuits) perform, in this setup and on average, worse than the algorithm-based circuits in terms of after-mapping fidelity and gate overhead (just like in Fig. 9).
We have noticed earlier (Sec. 2) that the size of a circuit, although an important feature, is not the only reason why some circuits have lower after-mapping overheads than others. Fig. 11 shows how the interaction graph parameters minimal degree, maximal degree, and average shortest path influence the fidelity and gate overhead of the circuits. As observed before, the closer the points in the graphs are to 0 on all axes simultaneously, the lower the overhead and fidelity decrease. The graph shows a strong correlation of both the increase in gate overhead (Fig. 11(a)) and the fidelity decrease (Fig. 11(b)) with the increase in maximal and minimal node degree and average shortest path. 2D cuts of Fig. 11 are shown in Fig. 12 for better visualization. The following observations can be made: 1) the higher all three circuit parameters (average shortest path, minimal and maximal node degree) are simultaneously, the higher the gate overhead (Fig. 12(a)) and fidelity decrease (Fig. 12(b)); this means the fidelity is highest and the overhead lowest when all three circuit parameters are close to 0. 2) Some patterns can be observed for circuits belonging to the same group, based on how they are created. For instance, QUEKO circuits (squares) have a high average shortest path (∼3), random circuits (hexagons) have a high average node degree (∼8), whereas RevLib and algorithm-based circuits (x symbols in the graph) have, on average, low values of the same parameters (∼1.5 for average shortest path and ∼4.5 for node degree).
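For concreteness, the interaction graph parameters discussed here could be computed along the lines of the following sketch, which uses NetworkX; the input format, function name, and toy gate list are assumptions made for illustration and are not the profiling scripts shipped with the benchmark suite.

```python
import networkx as nx

def interaction_graph_parameters(two_qubit_pairs):
    """Compute structural parameters of a circuit's qubit interaction graph.

    `two_qubit_pairs` is one (qubit_a, qubit_b) tuple per two-qubit gate;
    repeated pairs accumulate as edge weights. Assumes the resulting
    interaction graph is connected.
    """
    g = nx.Graph()
    for a, b in two_qubit_pairs:
        if g.has_edge(a, b):
            g[a][b]["weight"] += 1
        else:
            g.add_edge(a, b, weight=1)
    degrees = [deg for _, deg in g.degree()]  # unweighted node degrees
    return {
        "min_degree": min(degrees),
        "max_degree": max(degrees),
        "avg_shortest_path": nx.average_shortest_path_length(g),
    }

# Toy circuit with CNOTs on (0,1), (1,2), (1,2), (2,3)
print(interaction_graph_parameters([(0, 1), (1, 2), (1, 2), (2, 3)]))
```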
In Sec. 3, quantum circuits were clustered based on size and interaction graph parameters. In Fig. 13, we can see how the clusters based on interaction graph similarity (an example is shown in Fig. 6) relate to the mapping performance metrics gate overhead, latency overhead, and fidelity decrease. As mentioned in Sec. 4, the lower these metrics are, the better the mapping performance. One can notice that circuits belonging to cluster 0 outperform the other circuits in terms of gate overhead and fidelity decrease (up to 200% for gate overhead, and an average of ∼89% for fidelity decrease), whereas clusters 3 and 4 show the best performance in terms of latency (up to ∼150%). What we can further conclude when comparing Fig. 9 and Fig. 13(a) is that the clusters mostly consist of benchmarks of the same type: cluster 0 mostly contains real circuits, cluster 3 random ones, and cluster 2 QUEKO circuits. This shows, for instance, that real quantum circuits, especially those from cluster 0, present some pattern in their structure that is easier to map without requiring too many additional gates. Finally, Fig. 13(b), which represents a 2D cut of Fig. 13(a), clearly shows the differences in the range of gate and latency overhead for the different clusters. For instance, clusters 3 and 4 have an almost constant circuit latency overhead, on average lower than for the other clusters, whereas circuits in cluster 0 have low and similar gate overhead. Gate overhead values of cluster 2 scale linearly with latency overhead. Quantum chip topology as one rationale behind results To look further into the reasoning behind the relation between quantum circuit parameters and mapping performance metrics, we first look into the device topology. Thus, we map the same groups of circuits onto two additional quantum platforms: the IBM Rochester and Aspen-16 quantum devices (Fig. 8). The outcomes are shown in Figures 14 and 15. Fig. 15 showcases detailed information on how much each structural parameter influences the three mapping performance metrics (gate overhead, latency overhead, and fidelity decrease) for all three device configurations. In Fig. 19 (see Appendix), additional details can be found. From the figures, we can derive the following: i) Different groups of benchmarks, based on their origin and structure, perform differently when executed on different device topologies. The main value of the figures comes from the fact that we can clearly choose a preferred quantum processor topology for each of the benchmark groups (e.g., Surface-97 is preferred for reversible arithmetic circuits, whereas IBM Rochester might be chosen for random ones, as shown in Figures 9 and 14). ii) The impact of the structural parameters on the results varies depending on the topology. For example, in the case of the two new topologies, the number of qubits was not as strongly correlated with gate overhead, whereas the degree of the graph played a more significant role. The correlation matrix shown in Figure 15 highlights that certain parameters are more relevant for specific quantum devices. For the IBM Rochester device, the most important parameter for gate overhead is the two-qubit gate percentage, whereas for Aspen-16 it is the maximal degree. The most important parameters for the fidelity decrease of both devices are the maximal and minimal degree of the qubit interaction graph, the number of qubits and gates, and the two-qubit gate percentage. In contrast, for the Surface-97 device, the most important parameters for gate overhead are the number of qubits and the two-qubit gate percentage, while the most important parameters for fidelity decrease are the number of qubits and the maximal degree of the qubit interaction graph. Latency overhead does not appear to be related to these structural parameters, so we will investigate this metric further in future work with other parameters. These observations suggest the following: • Interaction graph parameters are more relevant for the mapping outcomes of the Aspen-16 and IBM Rochester devices than for Surface-97. We can see that the majority of the structural parameters are highly correlated with the circuit fidelity decrease. The main reason is that these processors have much less connected coupling graphs; in other words, the sparser the coupling graph, the stronger the correlation with the interaction graph parameters. In our case, Aspen-16 has the most restricted coupling graph connectivity, and consequently, its mapping metrics have the highest correlation with the interaction graph properties. • The two-qubit gate percentage, as expected, shows a very high correlation with the gate overhead metric regardless of the device. Other size-related parameters (the number of qubits and gates) are highly correlated with the fidelity decrease of the Aspen-16 and IBM Rochester devices, again due to the limited connectivity of their coupling graphs as well as their smaller device size. On the other hand, the number of qubits only correlates with the gate overhead of Surface-97, which can be attributed to the fact that it is a much larger device on which we could run much bigger and more complex circuits, inevitably leading to long routing paths between at least some of the qubits.
iii) The two new topologies used for these experiments have quite similar structures (just at different scales in terms of qubit range), and consequently, the experiments showed similar patterns. In future work, we plan to expand our analysis by including additional device topologies. iv) In cases where there is no correlation between interaction graph parameters and certain results (such as latency overhead and minimal degree), this suggests that other structural parameters may have played a more significant role. In our future work, we plan to investigate additional parameters such as gate-dependency critical paths and parallelism, which are discussed in Sec. 6. Similar findings were also observed in a previous study [16], demonstrating differences between topologies. To further investigate the benchmark cluster-device relationship, we continued by observing the circuits belonging to the same clusters. We noticed the following (Fig. 16): cluster 0 consists of sparse, low-degree graphs and mostly RevLib circuits; cluster 1 is composed of circuits with a very large standard deviation of the weight distribution; cluster 2 includes grid-like-shaped circuits, mostly QUEKO benchmarks; cluster 3 has the densest graphs with the highest node degree, mostly consisting of randomly generated circuits; and finally, cluster 4 contains circuits with a large average shortest path, mostly QUEKO circuits based on existing algorithms [49]. As expected, the sparse graphs of low node degree in cluster 0, which are easier to map to the 2D-grid-resembling qubit topology, required the lowest number of additional SWAPs, but due to their specific, algorithm-based structure, they could not be optimized well in terms of depth (their operations are more difficult to parallelize). Cluster 0 is the only cluster containing circuits whose fidelity did not drop by 100%. (Fig. 16 shows the qubit interaction graphs for circuits belonging to cluster 0 (a) and to cluster 3 (b).) On the other hand, the 2D-grid qubit topology, which is the most common state of the art for quantum chips, could not handle well the dense graphs belonging to cluster 3, most of which are random circuits; however, these circuits did perform fine in terms of latency. What is also interesting, based on these outcomes, is that having, for instance, a high average shortest path (like the circuits in cluster 4) leads to low latency overhead, as explained in Sec. 5.1, meaning that the circuit depth was not extended as much. That was expected, considering that such circuits are much less connected and easier to parallelize. Further, we have also analyzed the relationship between the different circuit clusters and the mapping performance metrics for the experiments performed with the latter two quantum devices, the 53q Rochester and the 16q Aspen processors (see Fig. 17). This time we clearly see different outcomes. For instance, cluster 0 no longer outperforms the others in terms of gate overhead (cluster 4 shows the lowest gate overhead of ∼12%); cluster 3 fluctuates much more in terms of latency (it goes up to ∼450% instead of the previous ∼150%); and cluster 4 does much better in terms of fidelity decrease (∼90% instead of the previous ∼100%). This is more evident for the Rochester device, as the number of circuits included is significantly larger. As the 16q-Aspen is at a smaller scale (lower number of qubits) but similar to the Rochester device in terms of connectivity, we also notice that they have similarly distributed clusters with respect to the mapping metrics. The data points in Fig. 17(b) could even be a subset of those in Fig. 17(a).
This outcome means that other devices with a similar topology and higher numbers of qubits would still show similar patterns. We discuss other possible reasons behind the results in the Future work section. Discussion and future work In Sec. 3, we mentioned that to complete the description of the structure of quantum circuits, in addition to the interaction graph, we also require gate dependency graph properties. Gate dependency graphs can give insight into how a circuit evolves in time. The critical path within the graph is the most relevant property, as it is related to the degree of parallelization of the gates, which directly influences the circuit depth. This would also help to explore the oracles or other patterns and repetitions within the circuit. In addition to gate dependency graphs, properties like the amount of parallelism in the circuit (gate density), measurements, and idle gates strongly influence the success rate of the circuit [16]. In addition to this, we must not underestimate the role of the mapping technique in these outcomes. For example, including features like look-ahead/look-back approaches or optimal initial qubit placement would probably have a stronger influence on the mapping results when used on circuits with predefined, steady, and repetitive structures. To verify this assumption, we plan to compare the performance of quantum circuits when using different types of mappers and optimization options, in order to investigate the mapper-circuit relationship in contrast to the device-circuit relationship demonstrated in this paper. That could then lead to guidelines for designing and optimizing algorithm-aware mapping techniques. For this purpose, structured design space exploration methodologies can be used, as pointed out in [33]. To conclude, in our future work we would like to explore further: i) other structural parameters of quantum circuits based on gate dependency graphs, such as the critical path, the density of gates per layer, and the amount of measurement and idle gates; with this, we will ensure that we encapsulate all structural perspectives of quantum circuits when performing benchmark clustering and profiling; ii) how the observed patterns (with the current parameters and additional ones) can help us predict the mapping performance of new circuit samples assigned to our clusters, without actually running them on the device; iii) how exactly the interaction graph and coupling graph similarity relate to the mapping result; and iv) the relationship between interaction graphs and gate dependencies and the chosen mapping technique, and to what extent that affects the circuit mapping performance on-chip. For this, we will include more compilation options when performing comparisons. This insight into circuit structure could help us compare and improve currently existing mapping techniques and enable algorithm-driven mappers and quantum devices.
Conclusion Current quantum devices are still limited in size and by noise and can only handle small and simple quantum algorithms. To execute quantum algorithms, expressed as quantum circuits, on these error-prone and resource-constrained devices, the circuits need to be adapted to overcome those limitations and thus prevent additional errors. That process is referred to as the mapping of quantum circuits and represents a complex optimization problem that depends on both processor and algorithm properties. In addition to hardware properties, in this paper we have analyzed how the structure of quantum circuits affects their mapping performance. Our selected quantum circuits were characterized not only in terms of standard parameters, such as the number of qubits and gates and the percentage of two-qubit gates, but also in terms of their interaction graph (i.e., graph-theory-based) parameters, which include the average shortest path, the minimal and maximal node degree, and the standard deviation of the edge-weight distribution. Our results show a strong correlation between these parameters and the circuit mapping metrics: gate overhead, latency overhead, and fidelity decrease all increased with an increase in the chosen parameters. The effect of these parameters varies across different devices and metrics. For example, the degree parameter has a larger impact on fidelity decrease for the IBM Rochester device than for the Surface-97 device. From these findings, we can identify the preferred devices for an algorithm with respect to specific individual metrics. Furthermore, after clustering the circuits based on the mentioned parameters, we found patterns in the mapping performance (in terms of the three mentioned metrics) of circuits belonging to the same cluster when mapped using the same technique on the same device. For instance, clusters with simpler, low-node-degree graphs showed better performance in terms of gate overhead when targeting a 2D-grid topology, whereas clusters consisting of complex and dense circuits outperformed the others in latency. On the other hand, different performance results were noted when running the same groups of circuits on two other, less-connected devices: size parameters like the number of qubits were far less relevant, synthetic circuits outperformed real ones (which was not the case for Surface-97), and the correlation between clusters of benchmarks and mapping results differed from the one obtained previously. It was also shown that the way circuits were created is closely related to their structure and impacts the results (e.g., whether they were uniformly randomly generated circuits), as such circuits were in most cases grouped into the same clusters. Finally, we could see how the clusters scale with the different mapping metrics. For instance, in one of the clusters, gate overhead scales linearly with latency overhead; in another, gate overhead stays within a specific range regardless of the increase in latency.
The proposed method and current findings will help to enhance circuit mapping techniques by including information about the structure of the circuit, as well as to provide a deeper understanding of the disparity in the observed outcomes when executing different quantum algorithms. In addition, the structural parameters of circuits could be used to predict their fidelity decrease and gate and latency overhead for a specific processor and compilation technique without running them on actual devices. This could help to analyze and perform design space exploration as well as co-design of current compilers, quantum processors, and quantum applications. Ultimately, this process contributes to the development of application-specific quantum systems, on which algorithms will run with higher performance. Quantum circuits are also used as benchmarks for evaluating mappers and quantum processors. However, the quantum community has still not agreed on a single benchmark set, which has resulted in an overwhelming number of sources of quantum circuits. In this work, we have created a soon-to-be open-sourced, easy-to-use benchmark collection containing benchmarks from various sources, cataloged in folders based on how they are implemented (e.g., based on a real algorithm, random, application-based), the language they are written in, and their size. The set also contains various scripts for translating circuits from one language to another, circuit interaction graphs, and profiling results, as described in this paper. We hope this collection will be useful for testing new quantum processors, will be updated regularly by the research community to keep up with new technologies, compilers, programming languages, and, most importantly, applications, and will eliminate the excessive number of benchmark sources. Acknowledgements The authors sincerely appreciate the contribution of Nikiforos Paraskevopoulos in creating the benchmark collection and its documentation, as well as scientific discussions with Prof. Eduard Alarcon (UPC). MB and SF would also like to acknowledge funding from Intel Corporation. This work has been partially supported by the Spanish Ministerio de Ciencia e Innovación and European ERDF under grant PID2021-123627OB-C51, by the QuantERA grant EQUIP, and by the Ministerio de Ciencia e Innovación and Agencia Estatal. Appendices Fig. 18 Fidelity decrease for real circuits (a) and synthetically generated ones (b); only benchmarks whose fidelity was higher than 10% to begin with are included. Fig. 1 Running a quantum circuit on a 7-qubit quantum processor. (a) Interaction graph G_i(V_i, E_i) of the circuit shown below; nodes V_i represent virtual qubits, and edges E_i show interactions between qubits (i.e., two-qubit gates). (b,c) The chip's coupling graph G_c(V_c, E_c); nodes V_c represent physical qubits, and edges E_c show connections on the chip (i.e., possible two-qubit interactions). (d) Circuit qubits (q_i ∈ V_i) are mapped onto physical qubits (Q_i ∈ V_c). (e) An extra SWAP gate is required to be able to perform all CNOT gates. Fig. 2 (a) Circuit fidelity vs. the number of gates. (b) Gate overhead (%) and decrease in fidelity. Synthetically generated circuits are marked with orange circles and real ones (i.e., quantum algorithms and routines) with blue squares. Here, only circuits with up to 500 gates were used.
Graph metrics used for circuit profiling: Clique / maximal clique: a subset of nodes in which all elements are directly connected / the largest clique in a graph. The smaller the number of nodes in the largest clique, the smaller the fully connected portion of the graph; worst case: the graph is fully connected, making quantum circuit mapping more difficult. Clustering coefficient: measures the cliquishness of a neighbourhood; it lies between 0 and 1, where 1 corresponds to a fully connected graph; worst case: the interaction graph is fully connected, i.e., a value of 1. Vertex/edge connectivity (reliability): the number of removed nodes/edges that disconnects the graph; the lower the reliability of edges and nodes, the less connected the graph, which consequently leads to an easier qubit mapping process. Adjacency matrix / max and min value / weight distribution / mean value / standard deviation / variance: an adjacency matrix is a square matrix used for graph representation; it shows which nodes are connected and by how many edges. Fig. 4 Heatmap of a Pearson correlation matrix for the quantum circuit and interaction graph metrics selected for mapping. Fig. 5 Clustering of quantum algorithms based on size-related parameters. Fig. 13 Relation of the clusters of circuits (shown in Fig. 6) to the parameters of their interaction graphs: a) gate and latency overhead and fidelity decrease, and b) gate and latency overhead. Fig. 14 Results of the circuit compilation when mapping different quantum circuits (random, QUEKO, reversible arithmetic circuits (RevLib), quantum-algorithm-based circuits) to the IBM Rochester (a) and Aspen-16 (b) device topologies using the MinExtend mapper. Fig. 17 Quantum circuit mapping metrics vs. clusters of quantum circuits when targeting the IBM Rochester (a) and Aspen-16 (b) topologies. Fig. 19 Results of the circuit compilation when mapping different quantum circuits (random, QUEKO, reversible arithmetic circuits, quantum-algorithm-based circuits) to the IBM Rochester (left) and Aspen 16q (right) device topologies using the MinExtend mapper, in terms of: a) and b) gate overhead and size parameters; c) and d) fidelity decrease and size parameters; e) fidelity decrease and IG parameters; and f) gate overhead and IG parameters. Table 2 Mapping overhead results for a sample of the quantum circuits used, with their properties, for the Surface-97 device. Table 3 Mapping overhead results for a sample of the quantum circuits used, with their properties, for the Aspen-16 device. Table 4 Mapping overhead results for a sample of the quantum circuits used, with their properties, for the IBM Rochester device.
12,762.4
2022-12-13T00:00:00.000
[ "Physics", "Computer Science", "Engineering" ]
Orbit Growth of Periodic-Finite-Type Shifts via Artin–Mazur Zeta Function : The prime orbit and Mertens' orbit counting functions describe, in a certain way, the growth of closed orbits in a discrete dynamical system. In this paper, we prove the asymptotic behavior of these functions for a periodic-finite-type shift. The proof relies on the meromorphic extension of its Artin–Mazur zeta function. Introduction Let (X, T) be a discrete dynamical system, where X is a topological space and T : X → X is a continuous map. For a point a ∈ X, the orbit of a is the set {a, T(a), T^2(a), . . .}. The point a is said to be periodic with period n ∈ N if T^n(a) = a. Furthermore, if T^n(a) = a but T^k(a) ≠ a for any k ∈ {1, 2, . . . , n − 1}, then a has least period n. In this case, its orbit τ(a) = {a, T(a), T^2(a), . . . , T^{n−1}(a)} is finite. Such an orbit is called a (prime) closed orbit of (least) period |τ(a)| = n. Some counting functions have been introduced to describe the growth of the closed orbits in a system; they were inspired by the counting functions for primes in number theory. These are the prime orbit counting function π(x) and the Mertens' orbit counting functions M(x) and M'(x), defined in terms of the closed orbits τ, the topological entropy h of the system, and x ∈ N. The functions are well-defined if the number of closed orbits of each period is finite. The idea of counting closed orbits in this way arises as a dynamical analogue of counting primes in number theory. Specifically, the famous Prime Number Theorem and Mertens' Theorem (see [1]) describe the asymptotic behavior of certain counting functions for primes: π(x) ∼ x/log x, ∑_{p≤x} 1/p = log log x + M + o(1), and ∏_{p≤x} (1 − 1/p) ∼ e^{−γ}/log x, where x ∈ N, p runs through the primes, γ is the Euler-Mascheroni constant and M is the Meissel-Mertens constant. Motivated by these results, we are interested in determining similar results for our counting functions for a given discrete system. The earliest works on this idea were done by Parry and Pollicott (see [2,3]) on shifts of finite type and their suspensions, where it was shown that a mixing shift of finite type with topological entropy h_1 > 0 satisfies an asymptotic formula of this type. Sharp [4] obtained the asymptotic behaviors of the counting functions for Axiom A flows; however, similar results can be deduced for a mixing shift of finite type, involving some positive constants β_1 and C_1. Similar results have been obtained for toral automorphisms. Waddington [5] proved analogous asymptotics for a quasihyperbolic toral automorphism with topological entropy h_2 > 0, involving a finite subset U of the unit circle S^1 and a function K : U → Z. Specifically, for an ergodic toral automorphism, Noorani [6] proved a further refinement involving some positive integer m and some positive constant β_2. The proofs of the results above depend on a generating function for the number of periodic points, called the Artin-Mazur zeta function [7]. For a system (X, T), its Artin-Mazur zeta function is defined as ζ(z) = exp(∑_{n=1}^∞ F(n) z^n / n), where F(n) is the number of periodic points of period n and z ∈ C. Furthermore, the zeta function can be expressed in terms of closed orbits as ζ(z) = ∏_τ (1 − z^{|τ|})^{−1}. By the formula for the radius of convergence, if the sequence (F(n)^{1/n})_{n≥1} is bounded, then the zeta function has a positive radius of convergence. For each system above, the zeta function has a non-vanishing meromorphic extension beyond its radius of convergence. Based on the proofs, this property leads to the asymptotic behaviors of the counting functions through some combinatorial calculation. It turns out that this approach can be applied to any system to obtain similar results on its orbit growth, as long as its zeta function has the mentioned property.
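As a standard illustration of a zeta function with the required behavior (this worked example is not taken from the papers cited above), consider the full shift on k symbols, for which F(n) = k^n and the topological entropy is h = log k:

```latex
\zeta(z) \;=\; \exp\!\left(\sum_{n=1}^{\infty}\frac{F(n)}{n}\,z^{n}\right)
        \;=\; \exp\!\left(\sum_{n=1}^{\infty}\frac{(kz)^{n}}{n}\right)
        \;=\; \frac{1}{1-kz}.
```

Here the radius of convergence is 1/k = e^{-h}, and the right-hand side extends ζ(z) meromorphically to the whole plane with a single simple pole at z = e^{-h} and no zeros, which is exactly the kind of non-vanishing meromorphic extension exploited in the arguments below.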
However, this approach via the zeta function may not be feasible for certain systems, for example, when the closed form of the zeta function is not readily available, or when the zeta function itself is complicated. In recent years, other approaches have been used to obtain the orbit growth of some systems. Alsharari et al. [8] used estimates on the number of periodic points of the Motzkin shift over R pairs of matching symbols and S neutral symbols to obtain the orbit growth of that system. In fact, setting S = 0 in the above gives the orbit growth of the Dyck shift over R pairs of matching symbols (see [9]). Later, Akhatkulov et al. [10] obtained sharper results for the Dyck shift, involving a positive constant Λ and an error term of order 1/x. There are other approaches for obtaining the orbit growth of a system, such as counting in orbit monoids [11] and using orbit Dirichlet series on some algebraic systems [12,13]. However, we will not delve deeper into these topics, since our focus here is on the approach via the zeta function. The results above are enough to demonstrate the recent progress in this research area. As a supplement, interested readers may refer to our survey [14] and the references therein for more exposure to the topic of orbit counting in discrete dynamical systems. Béal et al. [15] introduced a new type of shift space, called periodic-finite-type shifts, which generalize shifts of finite type. Their zeta function was obtained by Manada and Kashyap [16], though it is more involved than in the case of shifts of finite type. To date, no result has been published on the orbit growth of periodic-finite-type shifts. Hence, our aim in this paper is to obtain the orbit growth of a periodic-finite-type shift via its zeta function. Since the zeta function was obtained in [16], it remains to investigate the properties of its meromorphic extension. In Section 2, we provide a key theorem, with proof, regarding the orbit growth of a general system via its zeta function. In Section 3, we review some properties of periodic-finite-type shifts and proceed to determine their orbit growth by investigating their zeta function. Some basic theory of matrices and graphs is required in proving the result on their orbit growth. Orbit Growth via Zeta Function In this section, we prove the asymptotic behaviors of the counting functions π(x), M(x) and M'(x) for a general system via its zeta function. The next theorem is inspired by the results in [3][4][5][6]. The proofs in those papers depend on the closed forms of the zeta functions of their particular systems. However, it turns out that the proofs work for any zeta function, as long as it satisfies certain analytic properties. Of course, certain parts of the proofs need to be modified to fit our general case, especially the combinatorial calculation. Because of that, we provide the detailed proof for completeness. Theorem 1. Let (X, T) be a discrete dynamical system with topological entropy h > 0 and Artin-Mazur zeta function ζ(z). Suppose that there exists a function α(z) that is analytic and non-zero for |z| < Re^{-h} for some R > 1, and such that the factorization (3) of ζ(z) in terms of α(z) holds for |z| < e^{-h} for some m, p ∈ N. Then π(x), M(x) and M'(x) satisfy the asymptotic formulas in parts (a) and (b) below, where γ is the Euler-Mascheroni constant and C is a positive constant that can be specified as in (4). Proof. (a) Since α(z) is analytic and non-zero for |z| < Re^{-h}, (3) implies that ζ(z) is also analytic and non-zero for |z| < e^{-h}.
From (1) and (3), for |z| < e^{-h}, we obtain an expression for the logarithmic derivative of ζ(z); equivalently, for |z| < e^{-h}, it can be written as a power series. Observe that z · α'(z)/α(z) is analytic for |z| < Re^{-h} and has a power series representation. Recall that any analytic function has a unique power series representation (see [17,18]). Therefore, the series ∑_{n=1}^∞ c(n) z^n is also its power series representation for |z| < Re^{-h}. For any S ∈ (1, R), the series ∑_{n=1}^∞ c(n) z^n converges for z = Se^{-h}. Therefore, the terms in the series are bounded, i.e., there exists a real M > 0 such that the bound (7) holds. Now, define ψ(x). By using (7), a bound involving some constant M' is obtained, from which it is easy to deduce the estimate (8); hence, from (8), the asymptotic estimate (9) for ψ(x) is obtained. Now, we need to relate ψ(x) and π(x). First, observe that ψ(x) can be expressed in terms of closed orbits as in (10); indeed, this is true by setting k = n|τ| and checking the resulting identity. For a closed orbit τ, the number of times it appears in the sum is ⌊x/|τ|⌋. Define the extension of π(x) over R as π̃(y) for y ∈ R. For any real δ > 1, set y ∈ R such that x = δy; this yields (11). By combining (10) and (11), we obtain (12). Now, we need to show that x · π̃(y)/ψ(x) → 0 as x → ∞ (and equivalently, y → ∞). First, we will prove that π̃(y)/e^{hδy} is bounded for any real δ > 1. Indeed, since ζ(z) is analytic for |z| < e^{-h}, ζ(e^{-hδ}) converges, and the bound then follows from (2). Furthermore, from (9), it is easy to see that e^{hx}/ψ(x) is bounded. Since π̃(y)/e^{hδy} and e^{hx}/ψ(x) are bounded, the required limit follows; overall, together with (12), the claimed asymptotic is obtained. Note that δ can be chosen arbitrarily close to 1, so the statement can be deduced in full strength. The desired result is obtained by combining (9) and (13). (b) We begin by proving the result for M(x). From (1) and (3), it can be shown that, for |z| < e^{-h}, a series expansion holds in which c(n) is given by (5). Recall that if a power series converges in an open disc, then it is analytic in the same disc (see [17,18]). Note that the series (6) is analytic for |z| < Re^{-h}, and so is the series ∑_{n=1}^∞ c(n) z^n / n by comparison. Since any analytic function is continuous (see [17,18]), the limiting value at z = e^{-h} is obtained. Recall that the harmonic sum satisfies ∑_{n=1}^{N} 1/n = log N + γ + o(1), where γ is the Euler-Mascheroni constant (see [19]). Therefore, (14) can be rewritten accordingly. Now, define the corresponding partial sum; it can be expressed in terms of closed orbits as in (16), which is again verified by setting k = n|τ| and checking the resulting identity. Furthermore, a closed orbit τ contributes to the sum ∑_{τ,n : n|τ|≤x} e^{-hn|τ|}/n exactly for n ∈ {1, 2, . . . , ⌊x/|τ|⌋}. Consider the sum in (17); since e^{-h} < 1, it can be rewritten as a convergent expression. From (16) and (17), we obtain (18) and (19). We will prove that the sum in the last line of (19) converges as x → ∞ by using the Riemann-Stieltjes integral with respect to π̃(x) (see [20]). We will also use the bound (20), involving some positive constants A and B, which is derived from the result in part (a). The expression in the last line of (21) converges as x → ∞, and so does the sum. By using (18), (19) and (21), we obtain (22). From (14) and (22), we deduce (23). The desired result for M(x) is obtained by exponentiating and rearranging the terms in (23). Now, we prove the result for M'(x). Using (17) and a similar calculation as above, we obtain (24) and (25). The sum in the last line of (25) converges as x → ∞, again by using the Riemann-Stieltjes integral with respect to π̃(x); indeed, it can be checked that a bound holds in terms of the constants A and B from (20), and the corresponding integral converges as x → ∞ by the integral test. The calculation is very similar to the previous one, so it is omitted here. Overall, from (24) and (25), the sum converges to a positive constant C.
By using (23), we obtain (26), and the desired result for M'(x) is obtained by rearranging the terms in (26). For a given system, these two properties of its ζ(z) (the analyticity and non-vanishing of α(z), and the factorization (3)) can help us determine a suitable function α(z) and its region of analyticity |z| < Re^{-h}. We will demonstrate this later for a periodic-finite-type shift. Periodic-Finite-Type Shifts In this section, we describe the construction of a periodic-finite-type shift and some of its important properties, such as its graph representation and zeta function. A periodic-finite-type shift is an example of a shift space (see [21] for details). Construction Let A be a finite set of symbols. Define the shift map σ : A^Z → A^Z as follows: for x = (x_i)_{i∈Z} ∈ A^Z, its image is given by σ(x) = (x_{i+1})_{i∈Z}. The element w = w_1 w_2 . . . w_k ∈ A^k for some k ∈ N is called a word, and the element x = . . . x_{-2} x_{-1} x_0 x_1 x_2 . . . ∈ A^Z is called a point. The word w is said to occur in x if there exists j ∈ Z such that x_j x_{j+1} . . . x_{j+k-1} = w_1 w_2 . . . w_k; this is denoted as w ≺_j x. For some t ∈ N, consider a list of finite subsets F_0, F_1, . . . , F_{t-1} ⊂ ∪_{k=1}^∞ A^k. Define the subset Σ ⊆ A^Z as follows: x ∈ Σ if and only if there exists r ∈ {0, 1, . . . , t − 1} such that no word w ∈ F_{s mod t} occurs at position s in σ^r(x), for all s ∈ Z. The pair (Σ, σ|_Σ) is called a periodic-finite-type shift of period t. Remark 2. (i) For the sake of simplicity, the restricted map σ|_Σ will be denoted simply as σ from now on. (ii) For the purposes of this paper, we call the integer r in the above definition the shifting value of x. Note that r is not necessarily unique for each x. It is known from [16] that, given a list of subsets F_0, F_1, . . . , F_{t-1}, we can construct a new list of subsets F'_0, F'_1, . . . , F'_{t-1} that gives the same periodic-finite-type shift and is such that (i) all words in F'_0 have the same length, and (ii) F'_1 = F'_2 = . . . = F'_{t-1} = ∅. A periodic-finite-type shift defined by a list of subsets with these properties is said to be in its standard form. Without loss of generality, we assume from now on that any periodic-finite-type shift is in its standard form. By definition, a shift of finite type is indeed a periodic-finite-type shift with period 1. Graph Representation A periodic-finite-type shift of period t can be represented by a labeled graph. Specifically, its graph G is a t-partite graph with sets of vertices V_0, V_1, . . . , V_{t-1}; this is called the Moision-Siegel representation [15]. Recall that a sofic shift is a shift space which can be represented by a labeled graph (see [21] for details). Therefore, a periodic-finite-type shift is indeed a sofic shift. Recall that a graph G is said to be irreducible if, for each pair of vertices u and v (not necessarily distinct), there exists a path from u to v. By the Perron-Frobenius Theorem [22], its adjacency matrix A(G) has a Perron eigenvalue λ_{A(G)}. Furthermore, a labeled graph G is said to be right-resolving if, for each vertex, each outgoing edge has a different label. From [21], the sofic shift represented by an irreducible right-resolving graph G has topological entropy h = log λ_{A(G)}. For a periodic-finite-type shift, its Moision-Siegel representation G is right-resolving. Therefore, if G is irreducible, then its topological entropy is h = log λ_{A(G)}. Recall that a shift space is said to be irreducible if, for any pair of words u and v occurring in some points, either uv occurs in some point or there exists a word w such that uwv occurs in some point.
From [21], if a labeled graph representing a sofic shift is irreducible, then the sofic shift itself is irreducible. However, the converse is false. In fact, there exists a periodic-finite-type shift which is irreducible even though its Moision-Siegel representation is not irreducible. This is shown in the following example. Example 1. Consider the periodic-finite-type shift (Σ, σ) constructed from A = {0, 1}, F_0 = {01, 11} and F_1 = ∅. Suppose that the words u and v occur in x ∈ Σ and y ∈ Σ, respectively. Let r and s be the shifting values of x and y, respectively. Let i be the position of the last symbol of u in x, and j be the position of the first symbol of v in y. Let x^- be the infinite string of symbols before u in x, and y^+ be the infinite string of symbols after v in y. Intuitively, the shifting value indicates whether the even or the odd positions in the point are to be checked for the occurrence of the forbidden words 01 and 11. If the value is 0, then the even positions are to be checked for those words; the opposite holds if the value is 1. Furthermore, if the value is 0, then the symbol 1 cannot occur at any odd position, since otherwise the forbidden word 01 or 11 would occur at some even position; again, the opposite holds if the value is 1. With the explanation above, it is easy to observe that if r ≡ i (mod 2), then the last symbol of u may be either 0 or 1; however, if r ≢ i (mod 2), then the last symbol of u must be 0. The analogous statement holds for the first symbol of v. Overall, we can check the irreducibility of (Σ, σ) through four cases, determined by whether r ≡ i (mod 2) and whether s ≡ j (mod 2): in two of the cases the word uv occurs in the point x^- uv y^+, and in the other two the word u0v occurs in the point x^- u0v y^+. However, the Moision-Siegel representation of this shift is not irreducible, because the vertices 10, 11 ∈ V_1 do not have incoming edges. This is shown in Figure 1. Zeta Function Manada and Kashyap [16] obtained the zeta function of a periodic-finite-type shift of period t by using its Moision-Siegel representation. For χ ∈ Ω, let χ* be the shortest word such that χ = (χ*)^k for some k ∈ N; for convenience, denote N_χ = k. Let L_χ be the length of χ*, and W_χ the number of 1's in χ*. Furthermore, let H_χ be the graph constructed as follows: (i) the set of vertices is V_0 = A \ F_0; (ii) for u, v ∈ V_0, there exists an edge from u to v if and only if there is a path of length L_χ from u to v in G_χ. Let A(G_χ) and A(H_χ) denote the adjacency matrices of G_χ and H_χ, respectively. With the notation above, the zeta function of a periodic-finite-type shift is given by the product formula (27), where I is the identity matrix. The zeta function is a rational function and thus has a meromorphic extension to the entire complex plane. Orbit Growth of a Periodic-Finite-Type Shift In this section, we prove the orbit growth of a periodic-finite-type shift of period t by applying Theorem 1 to the zeta function in (27). For this, we need to obtain a region of analyticity beyond the radius of convergence such that the meromorphic extension is non-zero in this region. From now on, we assume that our periodic-finite-type shift has an irreducible Moision-Siegel representation G; thus the shift itself is irreducible.
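Since the analysis below is controlled by the Perron eigenvalue of an adjacency matrix, the following small numerical sketch shows how λ_{A(G)} and the entropy h = log λ_{A(G)} could be computed; the 2×2 matrix used here (the graph of the golden mean shift) is purely a hypothetical stand-in and is not a Moision-Siegel representation from this paper.

```python
import numpy as np

# Adjacency matrix of a small irreducible graph (golden mean shift graph),
# used only as an illustrative stand-in for A(G).
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])

eigenvalues = np.linalg.eigvals(A)
perron = np.max(np.abs(eigenvalues))    # spectral radius = Perron eigenvalue
entropy = np.log(perron)                # h = log(lambda_A(G))

print(f"Perron eigenvalue: {perron:.6f}")    # golden ratio, ~1.618034
print(f"Topological entropy: {entropy:.6f}")
```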
Recall that for an irreducible graph, the graph period is the greatest common divisor of the lengths of the cycles through any vertex. Since G is a t-partite graph, its graph period must be a multiple of t, i.e., ct for some c ∈ N. For a square matrix A_0, observe that det(I − z · A_0) = ∏_µ (1 − µz), where µ runs through the eigenvalues of A_0. Therefore, the zeros and poles of the zeta function in (27) are determined by the eigenvalues of the adjacency matrices A(G_χ) and A(H_χ) for all χ ∈ Ω. Observe that for χ = 10^{t-1} ∈ Ω, the graph G_χ is indeed the Moision-Siegel representation G. Since N_χ = 1 is odd, there is no corresponding graph H_χ. Since G is irreducible with graph period ct, the Perron-Frobenius Theorem [22] describes the eigenvalues of A(G) of maximal modulus in terms of the Perron eigenvalue λ_{A(G)}; applying the factorization above to det(I − z · A(G)), where µ runs through the eigenvalues of A(G), it follows that the zeta function in (27) has simple poles on the circle of radius λ^{-1}_{A(G)}, and these are contributed by χ = 10^{t-1} ∈ Ω. We will show that the zeros and the other poles of the zeta function are located beyond the radius λ^{-1}_{A(G)}. From now on, for a non-negative matrix A_0, we denote by λ_{A_0} its spectral radius, i.e., the largest modulus of the eigenvalues of A_0. In the next two lemmas, to avoid the trivial case, we only consider a proper periodic-finite-type shift, i.e., t ≥ 2. We will compare the graph G_χ with the Moision-Siegel representation G. Denote by u^{(i)}_j a vertex from the set V_i in G_χ for i ∈ {0, 1, . . . , L_χ − 1}, where j is simply an index for the position (to be used in the path notation later); we use similar notation for G as well. Let ρ be a path of length l ∈ N in G_χ starting from a vertex in V_k for some k ∈ {0, 1, . . . , L_χ − 1}. Observe that we can associate with ρ a path of the same length in G, and this association is unique. Therefore, we can say that the set of paths in G_χ is embedded into the set of paths in G. Let P_l(G_χ) be the set of paths of length l in G_χ, and similarly P_l(G) for G. Based on the properties of a shift of finite type constructed from a graph (see [22] for details), the growth rate of |P_l(G)| is governed by λ_{A(G)}, and similarly for λ_{A(G_χ)}. Since P_l(G_χ) is embedded into P_l(G), so that |P_l(G_χ)| ≤ |P_l(G)|, we conclude that the corresponding growth rates are ordered accordingly. By the Perron-Frobenius Theorem, the last two inequalities imply the desired result. Proof. Let G_χ^{L_χ} be the graph constructed from the matrix A(G_χ)^{L_χ}. Observe that H_χ is a subgraph of G_χ^{L_χ}. Using a similar argument on the shift of finite type constructed from a graph as in Lemma 1, we obtain the corresponding inequality; this last inequality and Lemma 1 imply the desired result. Now, we are ready to prove our main theorem. Theorem 2. Let (Σ, σ) be a periodic-finite-type shift with period t ∈ N. Suppose that its Moision-Siegel representation G is irreducible with graph period ct for some c ∈ N. Suppose further that the Perron eigenvalue λ of its adjacency matrix A(G) satisfies λ > 1. Then the asymptotic formulas of Theorem 1 hold, where γ is the Euler-Mascheroni constant, α(z) is defined as in (28), and C is a positive constant that can be specified as in (4). Proof. Recall that the topological entropy of our periodic-finite-type shift is h = log λ_{A(G)} (or, in this setting, h = log λ). Based on Theorem 1, we need to obtain the function α(z) and a constant R > 1 such that α(z) is analytic and non-zero for |z| < Rλ^{-1}_{A(G)}. We have the following observations: (i) for χ = 10^{t-1} ∈ Ω, the expression det(I − z · A(G)) gives rise to some simple poles at radius λ^{-1}_{A(G)}, and also to other poles at µ^{-1} for the other non-zero eigenvalues µ of A(G).
Since |µ| < λ_{A(G)} for those eigenvalues, by the definition of the Perron eigenvalue, the other poles are located beyond the radius λ^{-1}_{A(G)}; (ii) for χ ∈ Ω \ {10^{t-1}}, the expression det(I − z · A(G_χ)) gives rise to zeros or poles at µ^{-1} for every non-zero eigenvalue µ of A(G_χ); however, Lemma 1 implies that these are located beyond the radius λ^{-1}_{A(G)}; (iii) for χ ∈ Ω \ {10^{t-1}}, the corresponding determinant factors over the eigenvalues µ of A(H_χ), giving rise to zeros at radius |µ|^{-1/L_χ} for every eigenvalue of A(H_χ); however, Lemma 2 implies that these zeros are located beyond the radius λ^{-1}_{A(G)}. Now, choose R > 1 in terms of the zeros and poles z_0 of ζ(z) with |z_0| > λ^{-1}_{A(G)}, and define α(z) as in (28) for |z| < Rλ^{-1}_{A(G)}. Based on our observations above, the closest poles of ζ(z) are located on the circle of radius λ^{-1}_{A(G)}, and the other poles and zeros lie beyond this radius. Therefore, α(z) is analytic and non-zero in this region. Since the conditions of Theorem 1 are satisfied, we obtain the orbit growth as desired. Remark 3. In the theorem above, the assumption that λ > 1 is required to ensure that the shift has topological entropy h > 0, as required in Theorem 1. The irreducibility of G does not guarantee that λ > 1. For example, the shift of finite type (hence, a periodic-finite-type shift) defined by A = {0, 1} and F_0 = {00, 11} has an irreducible Moision-Siegel representation, but its Perron eigenvalue is λ = 1. Conclusions In this paper, we have obtained the orbit growth of a periodic-finite-type shift via its zeta function, as shown in Theorem 2. We can also deduce from Theorem 2 the orbit growth of a mixing shift of finite type (where t = 1 and c = 1), and this agrees with the results in [2][3][4]. Furthermore, this approach via the zeta function works for any discrete system, as long as the zeta function satisfies the conditions stated in Theorem 1. With our demonstration here, we hope that this approach will be applied to obtain results on the orbit growth of other systems in future studies.
6,391
2020-05-01T00:00:00.000
[ "Mathematics" ]
Vibroelectronic Properties of Functionalized Single-Walled Carbon Nanotubes and Double-Walled Boron Nitride Nanotubes Introduction Carbon is the first element in group IV of the periodic table and has a 1s^2 2s^2 2p^2 electronic configuration, in which four valence electrons allow it to form a number of so-called hybridized atomic orbitals. Carbon atoms in elemental substances bond to each other covalently by the sharing of electron pairs, in which the covalent bonds have directional properties; this in turn provides carbon the capability to form various molecular and crystalline solid structures. The nature of the covalent bonds that are formed dictates the varied chemical and physical properties of the carbon allotropes. Pure carbon-based materials not only exist as the commonly recognized diamond and graphite allotropes, but also as more exotic entities such as fullerenes, carbon nanotubes (CNTs), and graphene; these latter allotropes have proven themselves important materials in nanotechnology. The present chapter deals with single-walled carbon nanotubes (SWNTs), whose unique properties, as suggested above, derive from their distinctive structure. In SWNTs, the carbon bonding is akin to that in graphite, as opposed to that found in diamond. More specifically, diamond has a coordination number of four, with sp^3 hybridization, while sp^2 hybridization exists in the planar layers of carbon atoms that give graphite its structure, and in the bonding that leads to the tubular structure of SWNTs. The sp^2 hybridization in graphite links carbon atoms in a two-dimensional (2D) layer of hexagons, so that each layer in the graphite structure, in the ideal case, forms a planar structure. Each carbon atom contributes 3 electrons to 3 equivalent sigma bonds within the plane and has 1 electron left in the perpendicular p_z orbital; such electrons are delocalized over the entire plane, resulting in a π-electron orbital system that allows the fourth valence electron to move essentially freely over the plane. Within the layers, the carbon-carbon bond distance is similar to the bond length in benzene (i.e., the carbon atoms are strongly bound to each other and the carbon-carbon distance is about 0.14 nm), leading to a very large in-plane value of Young's modulus. However, the distance between layers (ca. 0.34 nm) is sufficiently large that the layers are bound to each other mainly by weak, long-range Van der Waals type interactions. The weak interlayer coupling gives graphite the character of a seemingly very soft material, a property that makes graphite suitable for use in pencils and in lubricants.
As a result of its intrinsic structure, the electrical conductivity of graphite is directionally dependent. Delocalized π-electrons parallel to the planes essentially experience metallic conduction, while electron mobility perpendicular to the layered planes would typically be much lower, but with a possibly significant temperature dependence, thereby imbuing graphite with semiconductor character as well. The directionality of the conductivity translates to a band structure that has a filled valence band and an empty conduction band separated by an energy gap. These bands, in one picture, would result from bonding and antibonding molecular π-orbitals that can be conceptualized in terms of energy-lowering and energy-raising combinations of the perpendicular p_z atomic orbitals. The π-bonding orbitals would be fully occupied while the π-antibonding orbitals would be unoccupied, with the gap being the energy difference between the top and bottom of the respective orbital sets. Because of the larger distance between its layers, graphite may form intercalation compounds with added species that act as electron donors, with graphite acting as an electron acceptor, incorporating the donated electrons into the vacant conduction band; or with species that act as electron acceptors, where graphite donates electrons from the full valence band. In diamond, it is to be noted that all valence electrons are localized around the carbon atoms; hence, this structural characteristic has profound effects on its electrical properties, with diamond being an insulator with a band gap around 6 eV. As is well known, carbon nanotubes can be obtained by rolling up a defined projected area from within the hexagonal lattice of a graphene sheet in a seamless fashion such that all carbon-carbon (C-C) valences are satisfied, and the direction in which the roll-up is performed transforms into the circumference of the tube. The projected area is in fact a homomorphic representation of a particular carbon nanotube [48(f-g)]. The roll-up vector is also termed the chiral vector, and is defined as n·a_1 + m·a_2, where a_1 and a_2 are the unit vectors of the hexagonal lattice, and n and m are the so-called chiral indices. An infinite number of nanotube geometries are possible, with a specific nanotube characterized by the chiral indices (n, m), which, in turn, define the chiral angle θ and the tube diameter d_t.
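The dependence of the tube diameter and chiral angle on the chiral indices (n, m) can be made explicit with the standard textbook relations sketched below; the formulas and the approximate lattice constant come from the general CNT literature rather than from this chapter.

```python
import math

A_CC = 0.142                        # carbon-carbon bond length in nm (approximate)
A_LATTICE = A_CC * math.sqrt(3)     # graphene lattice constant, ~0.246 nm

def tube_diameter(n, m):
    # d_t = a * sqrt(n^2 + n*m + m^2) / pi
    return A_LATTICE * math.sqrt(n * n + n * m + m * m) / math.pi

def chiral_angle_deg(n, m):
    # theta = arctan(sqrt(3)*m / (2n + m)); 0 deg for zigzag (n,0),
    # 30 deg for armchair (n,n) in this common convention.
    return math.degrees(math.atan2(math.sqrt(3) * m, 2 * n + m))

print(tube_diameter(10, 0), chiral_angle_deg(10, 0))   # zigzag (10,0), ~0.78 nm
print(tube_diameter(5, 5), chiral_angle_deg(5, 5))     # armchair (5,5)
```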
When dye molecules are encapsulated inside SWNTs, the encapsulation quenches the strong dye luminescence, which allows measurement and analysis of the dye's Raman spectra [30]. Also, L. Alvarez et al. [31] have reported that, while infrared spectroscopy (IR) might provide evidence of a significant positive charge transfer for an inserted oligothiophene, the Raman spectra evince different behaviors depending on the excitation energy and its relationship to the oligomer's (specifically, quaterthiophene's) optical absorption energy. For example, at high excitation wavelength (far from the oligomer's resonance), the radial breathing modes exhibit a significant blue-shift as a result of the encapsulation effect, while at low excitation wavelength, close to resonance with the oligomer absorption, both the G-band and the low-frequency modes vanish, suggesting a significant charge transfer between the oligomer and the nanotube. CNTs are also widely used in the clinical and research medical arenas. They find application as superior drug delivery media, in health monitoring devices, as biosensing platforms for the treatment of various diseases, in chemical sensor devices, etc. [32,33,34,35]. Functionalized SWNTs (i.e., f-SWNTs) have been known to have increased solubility and to permit efficient tumor targeting/drug delivery; functionalization also prevents SWNTs from being cytotoxic and possibly alters the functioning of immune cells. Moreover, carbon nanotubes have enhanced solubility when functionalized with lipids, which makes their movement through the human body easier and reduces the risk of blockage of vital organ pathways. Also, CNTs exhibit strong optical absorbance in certain spectral windows, such as the NIR (near-infrared); when functionalized with tumor-cell-specific binding entities, the nanotubes have allowed the selective destruction of diseased (e.g., cancer) cells with NIR light in drug delivery applications. More recently, boron nitride nanotubes (BNNTs) can be counted among the modified CNT structures that have been synthesized [36,37,38]. The electronic properties of boron nitride nanotubes differ from those of carbon nanotubes: while carbon nanotubes can be either metallic or semiconducting, depending on their chirality and radius [39], all boron nitride nanotubes (BNNTs) are found to be semiconducting materials with a large band gap [40]. And since the band gap is large, the gap energy is only weakly dependent on the diameter, chirality, and the number of walls of a multi-walled tube structure. Moreover, because of their semiconducting character, BNNTs, like CNTs themselves, are also very interesting materials for application in nanoscale devices, and have been considered alternatives to CNTs [41,42]. As with CNTs, the modification of the electronic properties of BNNTs by doping and functionalization is an important avenue for making nanodevices. Doped BNNTs may exhibit dramatic changes relative to the pristine nanotube. Furthermore, because of the strong interactions between electrons and holes in BNNTs [43,44], excitonic effects have proven to be more important in BNNTs than in CNTs. Bright and dark excitons in BNNTs qualitatively alter the optical response [45]. For a better understanding of the physical and optical properties of nanotubes, quantum mechanical calculations have been extremely helpful. In this chapter, we provide theoretical results on double-walled boron nitride nanotubes (DWBNNTs) and functionalized nanotubes using DFT; this report extends the quantum chemical computational approach that we have used earlier [48(f-g)]. The results of the calculations not only indicate a shift in the spectral peak positions of the RBM and G-modes in the Raman spectra of DWBNNTs relative to their corresponding isolated SWBNNTs, but also indicate a charge transfer from the outer shell to the inner shell when DWBNNTs are excited, as discussed in Section 3. Furthermore, plots of the frequencies of the vibrational radial breathing modes (RBM) versus 1/d_t for (2n,0)&(n,0)-DWBNNTs exhibit a strong diameter dependence. Figure 1. A general Perrin-Jablonski diagram for a fluorescent molecule, where S and T stand for singlet and triplet electronic states, respectively; IC and ISC represent "internal conversion" and "intersystem crossing", respectively. For functionalized single-walled carbon nanotubes, we find that there should be a charge transfer process, directed from the nanotube to an attached molecule, which is active in optical excitations.
More generally, upon irradiation a system can undergo internal conversion (IC) and intersystem crossing (ISC) processes, in addition to photochemical and other photophysical processes. Transient intermediates are likely to form in the IC and ISC radiationless processes, herein referred to as "dark processes," and they are not detected using conventional light absorption or emission spectroscopic methods. As seen from the combined Perrin-Jablonski diagram in Figure 1, for a typical molecule the emission of a photon from an electronically excited state to the ground state results in fluorescence in the region of 300 to 1500 nm. Photophysical processes for an isolated molecule occur as a result of transitions between the different internal energy levels that comprise the electronic states. A molecular system in the gas phase or in solution at room temperature is mostly expected to be in its ground state (S_0). Excitation of a molecular system from its ground state to an excited vibroelectronic state by absorption of a photon (occurring within ca. 10^-15 s) is much faster than emission of a photon from an excited electronic state (S_k, k ≥ 1) back to the ground state (occurring in ca. 10^-8 s). Not all excited molecular systems return directly to the ground state by emission of a photon (an S_k>0 → S_0 transition); some return to the ground state (S_0) by internal conversion (IC): when the molecule is excited into a higher vibroelectronic state (S_k, k > 1), it may relax to the S_1 state (in about 10^-12 s) via vibrational coupling between these states, before undergoing additional vibrational relaxation within the lowest singlet electronic level (S_1); this is referred to as internal conversion. Subsequently, the transition from S_1 to S_0 by emission of a photon (fluorescence) occurs. An alternate pathway for a molecule in the lowest-energy S_1 state involves intersystem crossing (at rates that can compete with fluorescence) into a triplet state T_1. From T_1, the molecule can undergo radiative de-excitation via a much slower process known as phosphorescence (the T_1 → S_0 transition), as illustrated by the Perrin-Jablonski diagram in Figure 1. It is to be noted that fluorescence resonance energy transfer (FRET) can be used to investigate intra- and/or intersystem energy transfer dynamics that might occur as one moves from single-walled nanotubes (SWNTs) to multi-walled nanotubes (MWNTs) or to functionalized nanotubes (f-NTs). Such dark intermediates are expected to play crucial roles in IC and ISC processes and thus are fundamental to understanding the mechanistic photochemistry of functionalized nanotubes and multi-walled nanotubes. We have used time-dependent DFT (TD-DFT) methods to determine the dark transient structures involved in radiationless processes for functionalized-SWCNTs and DWBNNTs. We have also calculated all possible singlet-triplet vertical electronic transitions and discussed them in terms of IC and ISC processes. It is to be noted that CNTs exhibit strong optical absorbances in certain spectral windows, such as the NIR (near-infrared); when functionalized with tumor-cell-specific binding entities, CNTs have facilitated the selective destruction of diseased (e.g., cancer) cells in the NIR and play a significant role in drug delivery applications [46].
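The competition sketched in the diagram can be summarized by simple rate bookkeeping: the observed S_1 lifetime and the fluorescence and triplet yields follow from the radiative, IC, and ISC rate constants. The sketch below is a minimal illustration with assumed, order-of-magnitude rate constants for a generic fluorophore; none of these numbers come from the calculations in this chapter.

```python
# Minimal sketch of the kinetic competition out of S1 shown in Figure 1. The rate
# constants below are assumed, order-of-magnitude values for a generic fluorophore,
# not quantities computed anywhere in this chapter.
k_fluor = 1.0e8  # s^-1, radiative (fluorescence) decay
k_ic = 5.0e8     # s^-1, internal conversion S1 -> S0
k_isc = 2.0e8    # s^-1, intersystem crossing S1 -> T1

k_total = k_fluor + k_ic + k_isc
s1_lifetime = 1.0 / k_total           # observed S1 lifetime
phi_fluorescence = k_fluor / k_total  # fluorescence quantum yield
phi_triplet = k_isc / k_total         # yield of the T1 (phosphorescence precursor) state

print(f"S1 lifetime: {s1_lifetime * 1e9:.2f} ns")
print(f"Fluorescence quantum yield: {phi_fluorescence:.2f}")
print(f"Triplet (ISC) yield: {phi_triplet:.2f}")
```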
In the present chapter, we also report calculated IR spectra of both functionalized SWCNTs and DWBNNTs.
Results and discussion
Computational methods: The ground-state geometries of single-walled carbon nanotubes (SWCNTs), double-walled carbon nanotubes (DWCNTs), single-walled boron nitride nanotubes (SWBNNTs), and functionalized SWCNTs were optimized without symmetry restriction on the initial structures. Both the structure optimizations and the vibrational analyses were carried out using DFT with the B3LYP functional, in which the exchange functional is of Becke's three-parameter type, including gradient correction, and the correlation correction involves the gradient-corrected functional of Lee, Yang, and Parr. The split-valence basis set 6-31G, as contained in the Gaussian 03 software package [47], was used (an illustrative input deck of this type is sketched below). The calculations did not produce any imaginary frequencies. The vibrational mode descriptions were made on the basis of the calculated nuclear displacements, using visual inspection of the animated normal modes (with GaussView03) [47] to assess which bond and angle motions dominate the mode dynamics for the nanotube. The DFT method was chosen because it is computationally less demanding than other approaches that include electron correlation. Moreover, in addition to its excellent accuracy-to-cost ratio, the B3LYP calculation of Raman frequencies has shown its efficacy in numerous earlier studies performed in this laboratory and by other researchers, often proving to be the most reliable and preferable method for many molecular species of intermediate size, including anions and cations [48]. In our calculations, hydrogen atoms have been placed at the end points of the unit cells. Furthermore, time-dependent density functional theory at the TD-B3LYP level was applied to calculate the vertical electronic transitions for the SWCNTs, SWBNNTs, and functionalized (7,0)- and (10,0)-SWCNTs. For the geometry optimizations and the calculations of electronic transitions of the covalently functionalized nanotubes, the 6-31G* basis set was used for the sulfur atom (S) and the 6-31G basis set was used for the other atoms. It is worth noting that the calculated structural and spectroscopic properties of the double-walled boron nitride nanotubes (DWBNNTs) and the functionalized zigzag single-walled carbon nanotubes (f-(n,0)-SWCNTs) used in this chapter have been submitted elsewhere for publication.
Structural results
Calculated diameters of the (0,n)&(0,2n)-DWCNTs (zigzag double-walled carbon nanotubes) and (0,n)&(0,2n)-DWBNNTs (zigzag double-walled boron nitride nanotubes), for n = 6 to 10, were found to decrease for the inner nanotube and increase for the outer nanotube, referenced to the corresponding diameter of the zigzag single-walled nanotube ((0,n)-SWNT), which changes with n. The calculated individual tube diameters of the inner and outer shells of the DWCNTs and DWBNNTs were fitted using a functional form that depends inversely on the single-walled nanotube's diameter; the fit parameters are given in Eqs. 1a-2b. A comparison of the diameters of the inner and outer shells of the DWNTs with their corresponding SWNT diameters shows that the inner-shell diameters decrease and the outer-shell diameters increase. These predictions explicitly indicate the existence of intertube interactions in DWCNT systems.
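Before continuing with the structural results, the sketch below illustrates the kind of job setup described under Computational methods: it writes a minimal Gaussian-style input deck requesting a B3LYP/6-31G geometry optimization followed by a Raman frequency calculation. The coordinate block is a deliberately tiny placeholder fragment (methane), not a nanotube unit cell from this work, and the file and function names are ours.

```python
# Hedged sketch: generate a minimal Gaussian-style input deck at the B3LYP/6-31G level
# with an optimization followed by a Raman frequency job. The geometry is a placeholder.
def write_gaussian_input(path, title, atoms, charge=0, multiplicity=1):
    lines = ["# B3LYP/6-31G Opt Freq=Raman", "", title, "", f"{charge} {multiplicity}"]
    lines += [f"{sym:2s} {x:12.6f} {y:12.6f} {z:12.6f}" for sym, x, y, z in atoms]
    lines.append("")  # a blank line terminates the molecule specification
    with open(path, "w") as fh:
        fh.write("\n".join(lines))


placeholder_atoms = [
    ("C", 0.000000, 0.000000, 0.000000),
    ("H", 0.629118, 0.629118, 0.629118),
    ("H", -0.629118, -0.629118, 0.629118),
    ("H", -0.629118, 0.629118, -0.629118),
    ("H", 0.629118, -0.629118, -0.629118),
]
write_gaussian_input("placeholder_opt_freq.com",
                     "B3LYP/6-31G opt + Raman (placeholder geometry)",
                     placeholder_atoms)
```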
As seen in Figure 3, the curvature energies of the DWCNTs and DWBNNTs, referenced to the global energies per hexagon, depend on the tube diameter, and the results of the calculations suggest that DWNTs with large diameters can be formed much more easily than those with small diameters. When comparing the formation energy of the DWCNTs with that of the DWBNNTs, as shown in Figure 3, it can be seen that the formation of the DWBNNTs is favored over that of the DWCNTs because of the relatively strong interactions between the inner and outer shells in the DWBNNT case. This finding is also supported by the calculated electron density, as discussed below, as well as by the relative change in the tube diameters on going from the SWNT to the DWNT, as seen in Eqs. 1-2. Furthermore, our ongoing calculations on the energetic stability of the DWBNNTs as a function of the interwall distance (between the inner and outer shells) indicate that an interwall distance of around 0.34 nm is the most stable, in excellent agreement with the experimental observations of J. Cumings [59]; these results will be published elsewhere. However, DWBNNTs with small interwall distances, such as the (0,6)&(0,12)-DWBNNT, might still be formed under different experimental conditions, and such small-interwall-distance DWBNNTs might be the more interesting ones for optical applications. For the (0,6)&(0,12)-DWBNNT, the geometry optimization, performed without any symmetry restriction, predicted a ground-state geometry of C_2v point group with a singlet-A_1 ground electronic state. The plotted electron density showed that while the highest occupied molecular orbitals (from HOMO to HOMO-4, of B_2, B_1, and 2E_2 symmetries, respectively) involve both the inner and outer shells, the HOMO-5, with 2E_1 symmetry, belongs to the outer shell only. The lowest unoccupied molecular orbital, the LUMO (E_1), lies about 4.699 eV above the HOMO (B_2) and belongs to the inner shell, while the next higher one (E_1), lying 5.521 eV above the HOMO, involves not only the inner and outer shells but also a significant sigma-bonding interaction between the inner and outer tubes in the excited state. For the (0,6)&(0,12)-DWCNT, the calculated electron density shows that the first four highest occupied molecular orbitals (from HOMO to HOMO-3, with A_1u, A_2g, and 2E_1g symmetries, respectively) belong to the outer shell, and the next higher occupied molecular orbitals, from HOMO-4 to HOMO-24, include both the inner and outer shells. The lowest unoccupied molecular orbital, the LUMO (E_1u), lies about 0.780 eV above the HOMO (A_1u) and belongs to the outer shell, while the next one (with B_2u symmetry) belongs to the inner shell and lies 0.849 eV above the HOMO (A_1u). The calculated electron densities also indicate that an intratube (inner-outer tube) interaction may take place in the excited state, since the LUMO+7 (A_2u), LUMO+8 (E_1u), LUMO+10 (E_1g), and LUMO+15 (E_1g) lie about 2.494, 2.557, 2.563, and 3.637 eV above the HOMO (A_1u), respectively. The intratube σ-bonding interaction in the excited state of the (0,6)&(0,12)-DWBNNT and DWCNT might lead to a probable intertube charge transfer, which can be observed through a significant change in the tangential modes (TMs) of the resonance Raman spectra when the tube is excited to its intratube charge-transfer state.
The TMs may provide information not only about the metallic or semiconducting character of nanotubes, but also about inner-outer tube (intratube) charge transfer. Indeed, very recently, resonant Raman measurements [49], photoemission measurements, and theoretical calculations have provided evidence of charge transfer between the inner and outer shells of DWCNTs. Given such a scenario, small-sized DWCNTs and DWBNNTs might be used as energy conversion systems owing to charge transfer between the shells, which might be indicated by changes in the Raman band intensities upon excitation in resonance with the charge transfer between the inner and outer shells.
Zigzag-SWBNNTs: In the low-frequency region (<500 cm⁻¹), the calculated Raman spectra of the (0,n)-SWBNNTs (n = 6 to 19) exhibit two Raman bands. One of them is known as the radial breathing mode (RBM) and the other is the elliptical deformation mode (EDM). The RBM is an important mode for the characterization and identification of particular nanotubes, especially of their chiralities; its importance derives from the inverse dependence of its frequency on the diameter of the nanotube. As seen in Figs. 5A-B, both the radial breathing mode (RBM, of A_1g symmetry, ω_RBM(A_1g)) and the other Raman band (the elliptical deformation mode, EDM, of E_2g symmetry, ω_EDM(E_2g)) have frequencies that depend inversely on the nanotube diameter. A linear fit to the calculated RBM frequency dependence on nanotube diameter, of the form ω_RBM(A_1g) = 48.51 + C_1/d_t (with the slope C_1 obtained from the fit), reproduces the DFT values within ±1 cm⁻¹. However, the offset constant in the linear fitting equation (48.51 cm⁻¹) produces significant error for the (0,n)-SWBNNTs with large diameters, because the RBM frequency decreases with increasing tube diameter and, in the limit of infinite diameter, the RBM reduces to a simple translation of the BN sheet; the RBM frequency should therefore go to zero in this limit. Therefore, a curve fit may instead be obtained using a cubic equation in the inverse diameter, ω_RBM = c_1/d_t + c_2/d_t² + c_3/d_t³, which reproduces the RBMs within a ±3 cm⁻¹ error range when compared with the Raman spectra of the SWBNNTs from (0,6) to (0,19) calculated with the DFT technique, and for which the RBM goes to zero in the limit of infinite diameter. An analytical expression for the other accompanying calculated low-frequency band (the EDM of E_2g symmetry), which has a lower frequency than the RBM, with best-fit parameters carried out to third order in the inverse diameter, reproduces the calculated values of the EDMs exactly. It is worth noting that, without an offset constant, a fitting equation (whether linear or higher order) reproduces the calculated EDM values only within a large error range. The band is labeled EDM, for elliptical deformation, after the predominant motions that define the vibrational mode, as ascertained with the vibration visualization software mentioned earlier. The results of the calculated Raman spectra of the (0,n)-SWBNNTs show that: 1) the RBM frequency dramatically increases with decreasing SWBNNT diameter, which is not so surprising since the N-B-N bond strain and the sp³ hybridization rapidly increase with decreasing SWBNNT diameter; 2) as seen in Figure 5, for large-sized SWBNNTs, the ω_RBM(A_1g) and ω_EDM(E_2g) mode frequencies converge.
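The inverse-diameter fits described above can be reproduced with a short least-squares script. The sketch below fits ω_RBM = c₁/d + c₂/d² + c₃/d³ (no constant term, so the frequency vanishes at infinite diameter); the diameter-frequency pairs are made-up placeholders rather than the chapter's DFT values.

```python
import numpy as np

# Hedged sketch: least-squares fit of RBM frequencies to a third-order polynomial in the
# inverse diameter, omega = c1/d + c2/d**2 + c3/d**3 (no constant, so omega -> 0 as
# d -> infinity). The (diameter, frequency) pairs are made-up placeholders, not the
# chapter's DFT values.
d = np.array([0.50, 0.65, 0.80, 1.00, 1.25, 1.50])            # nm, assumed
omega = np.array([470.0, 360.0, 295.0, 235.0, 188.0, 157.0])  # cm^-1, assumed

X = np.column_stack([1.0 / d, 1.0 / d**2, 1.0 / d**3])  # design matrix without offset term
coeffs, *_ = np.linalg.lstsq(X, omega, rcond=None)
c1, c2, c3 = coeffs
print(f"omega_RBM(d) ~ {c1:.2f}/d + {c2:.2f}/d^2 + {c3:.2f}/d^3  (cm^-1, d in nm)")
print("max |residual| =", np.abs(omega - X @ coeffs).max(), "cm^-1")
```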
As an illustration of this convergence, the calculated frequency separation between the RBM and the EDM is found to be 3, 7, 21, and 43 cm⁻¹ when n has the values 26, 25, 22, and 19, respectively. Thus, one can anticipate that the (0,28)-SWBNNT would have RBM and EDM bands that are unresolvable in experimental spectra. We can also anticipate that the acquisition of Raman spectra of experimental samples consisting of large-diameter SWBNNTs, for the purpose of characterizing the sample in terms of electronic properties and purity, may be complicated by the existence of this EDM band, which, in general, can lead to apparent broadening of bands as well as to additional bands that may prompt the erroneous conclusion that more than one type of SWBNNT is present in the sample. Of course, this issue is not expected to be of great significance, since the synthesis routes presently in vogue do not lead to nanotubes with diameters as large as that corresponding to the (0,26) index. It is to be noted that the E_2g band lies at lower frequencies than the RBM (see Figure 5A). As regards other general conclusions that can be drawn from our calculations for the SWBNNTs, we have found that the calculated Raman bands in the mid-frequency region exhibit nearly size-independent peak positions. As shown in Table 1 and Figs. 5A-B, in the high-frequency region there are a few Raman bands of E_1g/E_2g/A_1g symmetries that lie close to one another in frequency. For instance, the calculated Raman modes with A_1g (~1355 ± 10 cm⁻¹) and E_2g (~1330 ± 25 cm⁻¹) symmetries approach one another in frequency with increasing diameter of the SWBNNT and then reach constant values of 1365 and 1356 cm⁻¹, respectively, as seen in Table 1. A fitting equation indicated that these two Raman bands (A_1g at ~1355 ± 10 cm⁻¹ and E_2g at ~1330 ± 25 cm⁻¹) first increase in frequency and then approach constant values of ~1366 and ~1360 cm⁻¹, respectively, with increasing diameter of the (0,n)-SWBNNT, up to n = 25. Furthermore, resonance Raman experiments [60,61] have shown that there is only one strong band, at 1355 ± 10 cm⁻¹, in the high-frequency region for boron nitride nanotubes. Thus, the calculated Raman bands at A_1g (~1355 ± 10 cm⁻¹) and E_2g (~1330 ± 25 cm⁻¹) are not only in good agreement with experiment, but the calculations also suggest that only the Raman band(s) of A_1g and/or E_2g symmetry are enhanced by resonance excitation of the boron nitride nanotube. Furthermore, the predicted shifts in the peak positions may result from the nanotube curvature effect: as mentioned in Refs. 48(f-h), the curvature energy of the nanotube brings about dissimilar force constants along the nanotube axis and along the circumferential direction. The nanotube geometry therefore causes a force-constant reduction along the tube axis compared to that in the circumferential direction. Consequently, the curvature effect might play a crucial role in the shift of the peak positions of the G-band as well as the RBM band, as mentioned earlier. In addition, the calculated Raman band positions for the bands at ~1240 ± 30 cm⁻¹ are found to be slightly size dependent, exhibiting a slight blue shift with increasing diameter of the SWBNNTs.
This disorder-induced mode is also important for the characterization of defects on the nanotube, as observed as a broad feature in the spectrum of the Al-modified MWBNNTs [63]. For example, in a resonance-enhanced Raman spectrum, the relative intensity of the disorder mode increases relative to the intensities of the breathing and tangential modes when there are defects on the nanotube surface as a result of chemical functionalization or structural deformation. For carbon nanotubes (CNTs), experimental studies have shown that an increase in the intensity ratio (I_D/I_G) indicates an increase in the number of defects on the sidewall of the nanotube (a minimal numerical sketch of this ratio is given below). This is an expected result of the introduction of covalently bound moieties to the nanotube framework, in which a significant fraction of the sp² carbons is converted to sp³ hybridization.
DWBNNTs: While Figure 6 provides the calculated nonresonance Raman spectra for the (0,n)&(0,2n)-DWBNNTs, with n ranging from 6 to 9, Figure 8 provides diagrams of the atomic motions associated with the vibrational frequencies for the (8,0)&(16,0)-DWBNNT, used as a representative case. The calculations show that the frequencies of the radial breathing modes (RBMs) and tangential modes (TMs, known as the G-mode) of the (n,0)&(2n,0)-DWBNNTs (with n = 6 to 9) differ significantly from those calculated for the (0,n)-SWBNNTs (see Figure 7 and Table 1). The results of the calculations are summarized below. In the low-frequency region, the calculated Raman spectra of these DWBNNTs exhibit two RBM modes resulting from the radial motion of the inner and outer shells, as shown in Figure 6, and both of these RBM modes are strongly diameter dependent. The large gap between the RBMs in the Raman spectra of the DWBNNTs decreases with increasing diameter of the inner and outer shells (as seen in Figure 6). Comparing these calculated RBMs with those of the corresponding single-walled tubes (Table 1), the separations between the RBMs in the spectra of the (0,n)&(0,2n)-DWCNTs are greater than the separations between the corresponding RBMs in the Raman spectra of the isolated (0,n)- and (0,2n)-SWCNTs. The RBM frequencies of the inner and outer shells can be fitted as functions of the inverse shell diameter d_t; the tentative fitting equations reproduce the calculated RBMs within a 0.5 cm⁻¹ error range for both the inner and outer tubes. Other Raman bands below the RBM modes in the spectra of the SWBNNTs are blue-shifted relative to the corresponding peaks in the spectra of their corresponding DWBNNTs (Figure 6). Moreover, Y. Bando et al. [62] have studied the Raman spectra of multi-walled boron nitride nanotubes containing natural ¹¹B and the ¹⁰B isotope (MWBNNT and MW¹⁰BNNT). Their Raman spectra of the MWBNNT and MW¹⁰BNNT showed only one strong Raman peak, at 1366 and 1390 cm⁻¹, respectively, in the range of 1200 to 1500 cm⁻¹, which is assigned to a BN stretching deformation vibration mode. This measured Raman peak is in good agreement with our calculated Raman peak (E_1g) at 1373 cm⁻¹ in the nonresonance Raman spectrum of the (0,8)&(0,16)-DWBNNT, which results from BN stretching along the tube axis, including bending deformation of the NBN/BNB bonds along the tube axis. Additionally, Obraztsova and coworkers [63] have investigated comparative Raman spectra of multi-walled boron nitride nanotube (MWBNNT) samples before and after Al ion modification.
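Since the discussion leans on the I_D/I_G ratio as a defect metric, the following sketch shows one simple way to extract it from a spectrum: take the maximum intensity within assumed D-band and G-band windows and form the ratio. Both the window limits and the synthetic spectrum are illustrative assumptions, not data from this chapter.

```python
import numpy as np

# Hedged sketch: estimate the defect-sensitive I_D/I_G ratio by taking the maximum
# intensity inside assumed D-band and G-band windows of a (synthetic) Raman spectrum.
def band_intensity(shift, intensity, lo, hi):
    mask = (shift >= lo) & (shift <= hi)
    return intensity[mask].max()


shift = np.linspace(1100, 1800, 1401)  # Raman shift axis, cm^-1
rng = np.random.default_rng(0)
intensity = (0.4 * np.exp(-0.5 * ((shift - 1345) / 20.0) ** 2)    # synthetic D band
             + 1.0 * np.exp(-0.5 * ((shift - 1590) / 15.0) ** 2)  # synthetic G band
             + 0.02 * rng.normal(size=shift.size))                # noise floor

i_d = band_intensity(shift, intensity, 1300, 1400)
i_g = band_intensity(shift, intensity, 1550, 1620)
print(f"I_D/I_G = {i_d / i_g:.2f}  (larger values suggest more sidewall defects)")
```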
In their spectra, two features were observed: one at 1366 cm⁻¹, corresponding to in-plane vibrations between B and N atoms, and a broad feature around 1293 cm⁻¹ in the spectrum of the Al-modified MWBNNTs. The broad peak around 1293 cm⁻¹ is consistent with the calculated Raman feature around 1250 cm⁻¹ in the spectra of the DW- and SW-BNNTs.
IR Spectra of Single-Walled and Double-Walled Boron Nitride Nanotubes
Zigzag-SWBNNTs: Figure 9A provides the calculated IR spectra for the (0,n)-SWBNNTs, where n ranges from 6 to 19. As evidenced in Figure 9, the calculated IR spectra exhibit seven peaks, of E_1u and A_1u symmetries, whose positions depend only slightly on the SWBNNT diameter. In the range of 1000 to 1550 cm⁻¹ there are six relatively weak IR features: four of E_1u symmetry centered at ~1475 ± 25, ~1330 ± 30, ~1230 ± 30, and ~1030 ± 5 cm⁻¹, and two of A_1u symmetry centered at ~1495 ± 15 and ~1350 ± 15 cm⁻¹. The strongest peak, of E_1u symmetry, is centered at 1395 ± 30 cm⁻¹. In the mid-frequency range, the calculated IR spectra of the (0,n)-SWBNNTs (n = 6 to 19) exhibit only one weak peak, centered at 805 ± 15 cm⁻¹. Analytical expressions for these calculated high-frequency bands, as functions of third order in the inverse of the (0,n)-SWBNNT diameter, are illustrated in Figure 9B. In the low-frequency region, the IR spectra exhibit many features; however, their intensities are extremely weak or vanish, as seen in Figure 9A. Furthermore, the vibrational mode assignments and frequencies for the IR spectra of the isolated zigzag-SWBNNTs are provided in Table 2.
DWBNNTs: Figure 10 provides the calculated IR spectra for the (0,n)&(0,2n)-DWBNNTs, with n ranging from 6 to 9, and Figure 11 provides further calculated IR spectra; the corresponding assignments are collected in Table 2. Moreover, Y. Bando [62] has studied the FTIR spectra of multi-walled boron nitride nanotubes containing natural ¹¹B and the ¹⁰B isotope (MWBNNT and MW¹⁰BNNT). The FTIR spectra of the MWBNNT and MW¹⁰BNNT revealed a strong, blue-degraded IR peak at 1376 and 1392 cm⁻¹, respectively, which is assigned to a B-N stretching deformation vibration mode. This measured IR peak is in good agreement with our calculated IR peaks at 1367 cm⁻¹ (E_1u, resulting from the bending deformation of the NBN/BNB bonds along the tube axis) and 1424 cm⁻¹ (E_1u, due to BN stretching along the tube axis, including bending deformations of the NBN/BNB bonds). The authors also observed a relatively weak and broad IR feature at ~800 cm⁻¹ and suggested that this peak is due to the existence of some B-O bonds in their BN nanotubes (see Figure 4 in Ref. [62]). However, our calculated IR spectra of the SWBNNTs and DWBNNTs exhibit a relatively weak IR feature around 800 cm⁻¹ that results from the out-of-surface bending deformation of the NBN/BNB bonds of the boron nitride nanotube. We therefore suggest that this IR peak (~800 cm⁻¹) may originate from the boron nitride nanotube itself.
Electronic transition energies of DWBNNTs and SWBNNTs
As mentioned in the introduction to this section, boron nitride nanotubes (BNNTs) can be viewed as modified CNTs, but their electronic properties differ from those of carbon nanotubes. For instance, although carbon nanotubes can be either metallic or semiconducting, depending on their chirality and radius, all boron nitride nanotubes (BNNTs) are semiconducting materials with a large band gap. And since the band gap is large, the gap energy is only weakly dependent on the diameter, chirality, and number of walls of the tube.
Furthermore, owing to their semiconducting character, BNNTs, like CNTs, are very interesting materials for application in nanoscale devices and have been considered alternatives to CNTs. DWBNNTs, as well as doped BNNTs, may show dramatic changes relative to the isolated nanotube, on account of the strong interactions between electrons and holes in BNNTs.
Table 2. DFT-calculated IR vibrational frequencies (in cm⁻¹) and assignments for the (0,n)-SWBNNTs and (0,n)&(0,2n)-DWBNNTs at the B3LYP/6-31G level.
One of the calculated transitions shows the electron excited from both the inner and outer shells mostly to the inner shell, with a significant sigma-bonding interaction between the inner and outer shells. Finally, the S_0(A') → S_11(A') transition indicates that the transitions from both shells to the excited state are mainly due to sigma-bonding interactions. We also calculated the triplet-triplet transitions, which produce many dipole-allowed transitions. The SCF-corrected singlet-singlet and triplet-triplet electronic transitions of the (0,6)&(0,12)-DWBNNT are given in Figure 12. As seen in Figure 12 and Table 3, upon irradiation the system can undergo internal conversion (IC) and intersystem crossing (ISC) via vibroelectronic coupling, in addition to photochemical and other photophysical processes. IC and ISC are to be expected when one takes account of the small spacing between the electronic energy levels relative to the range of the vibrational spectra of the DWBNNTs. Furthermore, Figure 4A provides the calculated electron density of the (0,6)&(0,12)-DWCNT (double-walled carbon nanotube), showing that the first four highest occupied molecular orbitals (from HOMO to HOMO-3, with A_1u, A_2g, and 2E_1g symmetries, respectively) belong to the outer shell, and the next highest occupied molecular orbitals, from HOMO-4 to HOMO-24, include both the inner and outer shells of the (0,6)&(0,12)-DWCNT. The lowest unoccupied molecular orbital, the LUMO (E_1u), lying about 0.780 eV above the HOMO (A_1u), belongs to the outer shell, while the next one (B_2u) belongs to the inner shell and lies 0.849 eV above the HOMO (A_1u). The calculated electron density also indicates that an intratube (inner-outer tube) interaction may take place in the excited state: the LUMO+7 (A_2u) lies 2.494 eV above the HOMO (A_1u), the LUMO+8 (E_1u) 2.557 eV, the LUMO+10 (E_1g) 2.563 eV, and the LUMO+15 (E_1g) 3.637 eV. The intratube CC σ-bonding interaction in the excited state may lead to an intertube charge transfer, which can be observed through a significant change in the tangential modes (TMs) of the Raman spectra when the tube is excited to its intratube charge-transfer state. The TMs may thus provide information not only about the metallic or semiconducting character of the nanotubes, but also about the inner-outer tube (intratube) charge transfer. Upon irradiation, the system can also undergo internal conversion (IC) and intersystem crossing (ISC), in addition to photochemical and other photophysical processes, and transient intermediates, also known as "dark processes," are likely to form in these radiationless pathways; our calculations indicate the possibility of such IC and ISC processes via vibroelectronic coupling.
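The IC/ISC argument above is essentially a statement about near-degenerate excited states. The sketch below shows the kind of screening that supports it: list the singlet and triplet excitation energies and flag pairs whose gap falls below an assumed coupling window. The energies are illustrative placeholders loosely patterned on values quoted in the text, not the actual Table 4 entries, and the 0.10 eV window is an assumption.

```python
from itertools import combinations, product

# Hedged sketch: flag near-degenerate excited states as candidates for internal conversion
# (singlet-singlet) or intersystem crossing (singlet-triplet). Energies and the coupling
# window are illustrative assumptions.
singlets = {"S1": 5.39, "S2": 5.42, "S3": 5.44, "S6": 5.47}  # eV, assumed
triplets = {"T1": 5.28, "T2": 5.35, "T3": 5.41}              # eV, assumed
GAP = 0.10  # eV

ic_candidates = [(a, b, round(abs(singlets[a] - singlets[b]), 3))
                 for a, b in combinations(singlets, 2)
                 if abs(singlets[a] - singlets[b]) <= GAP]
isc_candidates = [(s, t, round(abs(singlets[s] - triplets[t]), 3))
                  for s, t in product(singlets, triplets)
                  if abs(singlets[s] - triplets[t]) <= GAP]

print("IC candidates (singlet-singlet):", ic_candidates)
print("ISC candidates (singlet-triplet):", isc_candidates)
```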
For instance, based on the calculated electronic transitions shown in Table 4, when the (0,8)&(0,16)-DWBNNT is excited, not all of the excited nanotubes return directly to the ground state by emission of a photon (an S_k>0 → S_0 transition); some return to the ground state (S_0) by internal conversion (IC). For example, when the system is excited into a higher vibroelectronic state (S_6, 5.47 eV), it may relax into the S_1 state (5.39 eV) via vibrational coupling between these two states, before undergoing additional vibrational relaxation within the lowest singlet electronic level (S_1); this is internal conversion (IC). The subsequent transition from the lowest excited singlet level S_1 (5.39 eV) to S_0 by emission of a photon is the so-called fluorescence. An alternate pathway for the S_1 state involves intersystem crossing (ISC) of the nanotube into the lowest triplet electronic state T_1 (5.28 eV). From T_1, the nanotube can undergo radiative de-excitation via a much slower process known as phosphorescence (the T_1 → S_0 transition), as illustrated in Figure 12. As seen in Table 4, the calculations also indicate the possibility of IC from S_k (k = 3, 4, 7-9, and 14) to S_1, as well as ISC from the singlet state S_1 (5.39 eV) into the triplet manifold. The key conclusion from the calculated electronic spectra is that the first dipole-allowed electronic transitions of the (0,n)&(0,2n)-DWBNNTs (n = 6, 8, 9) lead to a charge transfer process from the outer shell to the inner shell. Moreover, significant intertube σ-bonding interactions between the inner and outer shells occur with decreasing interwall distance of the DWBNNTs; in contrast, for the (0,9)&(0,18)-DWBNNT, there is a relatively weak contribution to the charge transfer process from the inner shell to the outer shell.
Covalently functionalized zigzag-SWCNTs
Carbon nanotubes have a broad range of potential applications, from medicine to industry, owing to their unique structural, mechanical, and electronic properties, as mentioned in the introduction. Different functionalization methods, such as chopping, oxidation, wrapping, and irradiation of the CNTs, can create active bonding sites on the surface of the nanotubes. In this section we calculate, for covalently functionalized carbon nanotubes (f-CNTs), quantities such as the relative curvature energies, the IR and Raman spectra, and the vertical electronic transitions. The latter may be important for understanding the optical mechanism of charge transfer between the functional group(s) and the CNT, as well as the internal conversion, intersystem crossing, and photochemical processes that may occur. The structures of the functionalized single-walled carbon nanotubes, f-(n,0)-SWCNTs, constructed from functional group(s) covalently bound to (n,0)-SWCNTs of two-unit-cell length, have been investigated. The most stable geometry was obtained by full optimization without any symmetry restriction. The optimized structures indicate that the cylindrical shape of the nanotube is altered to an elliptical form when two molecules are attached to the surface of the CNT, but the structure remains almost cylindrical, with C_4 symmetry, when four functional groups are bound.
When we used benzenesulfonic acid (Ph-SO3H; C6H5SO3H) as a functional group covalently bonded to the surface of the (n,0)-SWCNTs, n = 6 to 12, the curvature energy per hexagon of the functionalized (n,0)-SWCNT relative to that of the corresponding isolated species, ΔE_f-(n,0)-SWCNT, is given by ΔE_f-(n,0)-SWCNT = E[f-(n,0)-SWCNT] - E[f] - E[(n,0)-SWCNT] (per hexagon), where E[f-(n,0)-SWCNT], E[f], and E[(n,0)-SWCNT] denote the global energies of the functionalized (n,0)-SWCNT, the isolated benzenesulfonic acid (C6H5SO3H), and the isolated (n,0)-SWCNT, respectively; f and n stand for the functional group and the chiral index of the zigzag CNTs. The plot of the calculated relative curvature energy is given in Figure 13. As seen in the figure, the relative curvature energies of the metallic and semiconducting CNTs are well separated. Based on the predicted energies, the results suggest that, for the metallic nanotubes, covalent functionalization of SWCNTs with small diameters is energetically more favorable than for those with large diameters. For the semiconducting nanotubes, functionalization of the tube is also favorable, with functionalization of the (11,0)-SWCNT being more favorable than that of the (10,0)-SWCNT. To make a reliable overall assessment, however, more data are needed, at least for the semiconducting zigzag nanotubes.
Raman spectra of functionalized zigzag-SWCNTs
The calculated nonresonance Raman spectra of the (n,0)-SWCNTs covalently functionalized with benzenesulfonic acid (-Ph-SO3H) and of the isolated (n,0)-SWCNTs (n = 7 to 10), together with the spectrum of the (7,0)-SWCNT functionalized with carboxylic acid (-COOH) for comparison, are shown in Figure 14. Because of the similarity of the Raman spectra of the f-SWCNTs, here we discuss only the Raman spectra of the (7,0)-SWCNT functionalized with benzenesulfonic acid and with carboxylic acid, and the spectrum of the isolated (7,0)-SWCNT. The Raman spectra of both functionalized (7,0)-SWCNTs exhibit many new features relative to the spectrum of the isolated (7,0)-SWCNT, as well as shifts in the peak positions. The predicted results are summarized below, after a brief numerical sketch of the functionalization-energy bookkeeping.
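The sketch below evaluates the per-hexagon functionalization energy defined at the start of this subsection, ΔE = E[f-SWCNT] - E[f] - E[SWCNT], divided by the number of hexagons in the model. The hartree-to-eV conversion factor is standard, while the total energies and hexagon count are placeholder numbers, not the B3LYP/6-31G results of this work.

```python
HARTREE_TO_EV = 27.211386

# Hedged sketch of the per-hexagon functionalization energy defined in the text:
# dE = E[f-SWCNT] - E[f] - E[SWCNT], divided by the number of hexagons in the model.
def relative_energy_per_hexagon(e_functionalized, e_functional_group, e_tube, n_hexagons):
    """Return the functionalization energy per hexagon in eV (inputs in hartree)."""
    delta_e = e_functionalized - e_functional_group - e_tube
    return delta_e * HARTREE_TO_EV / n_hexagons


# Placeholder energies for an f-(7,0)-SWCNT model, isolated benzenesulfonic acid,
# and the isolated (7,0)-SWCNT fragment, with an assumed hexagon count of 28.
print(f"dE per hexagon: "
      f"{relative_energy_per_hexagon(-3050.7421, -779.6305, -2271.0892, 28):.4f} eV")
```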
In the low-frequency region below 600 cm⁻¹: 1) one important Raman peak, the radial breathing mode (RBM), predicted at 410 cm⁻¹ for the isolated (7,0)-SWCNT, is shifted to 390 and 385 cm⁻¹ in the spectra of the (7,0)-SWCNT functionalized with benzenesulfonic acid and with carboxylic acid, respectively, and is also enhanced in both spectra; 2) the relatively weak peaks at 109 and 111 cm⁻¹, which result from the elliptical deformation of the carbon nanotube, are shifted to 75 and 121 cm⁻¹ in the spectrum of the (7,0)-SWCNT functionalized with benzenesulfonic acid, and to 95 and 134 cm⁻¹ in the Raman spectrum of the (7,0)-SWCNT functionalized with carboxylic acid, with the intensity enhanced in both spectra of the functionalized tube; 3) a doubly degenerate peak predicted at 284 cm⁻¹ (arising from a diagonal expansion of the tube) in the Raman spectrum of the isolated tube is split into two well-separated peaks, appearing at about 250 and 306 cm⁻¹, in the spectrum of each (7,0)-SWCNT functionalized with benzenesulfonic acid or carboxylic acid; 4) a relatively very weak peak at 500 cm⁻¹ in the spectrum of the isolated tube appears at the same position, but with significantly enhanced intensity, in the calculated Raman spectra of the (7,0)-SWCNT functionalized with benzenesulfonic acid and with carboxylic acid; 5) many relatively weak Raman features, resulting from the out-of-plane structural deformation of the functional groups, appear below 600 cm⁻¹, as seen in Figures 14 and 15. In the range from 600 to 1250 cm⁻¹, the Raman spectra of the f-(n,0)-SWCNTs exhibit many new peaks of medium, weak, and very weak relative intensity, in addition to the peaks that appear at 760, 794, and 911 (very weak) cm⁻¹. For instance, in the Raman spectrum of the (7,0)-SWCNT functionalized with benzenesulfonic acid, relatively intense peaks appear at 1120 cm⁻¹ (due to structural deformation of the tube, including wagging of the CH bonds of the benzene ring); at 1138 cm⁻¹ (as a result of asymmetric CSO bond stretching and OH bond wagging, including a relatively weak bending deformation of the benzene ring); at 1142 cm⁻¹ (structural deformation of the tube due to CC stretching, accompanied by wagging of the H atoms on the benzene ring); and at 1185 cm⁻¹ (owing to asymmetric CSO bond stretching and wagging of the OH bond). The Raman spectrum of the (7,0)-SWCNT functionalized with carboxylic acid exhibits relatively strong Raman features at 1122 cm⁻¹ (caused by structural deformation of the nanotube, including OH bond wagging); at 1146 cm⁻¹ (by reason of asymmetric stretching of the CCO(H) bond, including tube deformation); and at 1181 cm⁻¹ (due to asymmetric stretching of the CCO bonds, including tube deformation). The Raman peaks at 760 cm⁻¹ (due to expansion of the tube along the tube axis) and 795 cm⁻¹ (as a result of out-of-surface bending deformation of the tube) in the Raman spectrum of the isolated (7,0)-SWCNT appear at the same positions in the spectra of the f-SWCNTs. A strong peak at around 1225 cm⁻¹ in the spectra of the (7,0)-SWCNT and f-(7,0)-SWCNT originates entirely from the wagging of the CH bonds at the ends of the tube. Many very weak Raman features also appear in this range from 600 to 1250 cm⁻¹.
In the range from 1300 to 1800 cm⁻¹, two peaks, at 1300 cm⁻¹ (weak) and 1330 cm⁻¹ (strong), appear in the spectra of the (7,0)-SWCNT functionalized with benzenesulfonic acid and with carboxylic acid as a result of symmetric stretching of CCC bonds and bending deformations along the tube axis; they correspond to a relatively weak, doubly degenerate Raman feature at 1305 cm⁻¹ in the isolated tube. A doubly degenerate, relatively very weak peak at 1411 cm⁻¹ (resulting from asymmetric stretching of the CCC bonds within the tube) in the Raman spectrum of the isolated SWCNT is split into two weak peaks, at about 1390 and 1405 cm⁻¹, in the Raman spectrum of the functionalized (7,0)-SWCNT. The Raman peak of medium intensity at 1486 cm⁻¹, resulting from CC bond stretching within the nanotube, corresponds to the peak at ~1504 cm⁻¹ in the Raman spectrum of the functionalized (7,0)-SWCNT. The strongest, doubly degenerate Raman peak at 1574 cm⁻¹ in the isolated (7,0)-SWCNT, resulting from asymmetric stretching of the CCC bonds along the circumferential direction of the tube, is blue-shifted to a nearly degenerate pair of peaks at 1590 and 1595 cm⁻¹, arising from CC bond stretching within the tube, in the Raman spectra of the f-(7,0)-SWCNT. In this range from 1300 to 1800 cm⁻¹, the Raman spectra of the (7,0)-SWCNT functionalized with benzenesulfonic acid and with carboxylic acid show many new Raman features. For example, the strongest peaks, appearing at ~1380 and ~1390 cm⁻¹, result from asymmetric tube deformation due to CC bond stretching and are not present in the spectrum of the isolated (7,0)-SWCNT. The peaks at 1373 and 1379 cm⁻¹ in the spectrum of the (7,0)-SWCNT functionalized with benzenesulfonic acid are mainly due to asymmetric stretching of the OSO bond and wagging of the OH bond, including asymmetric stretching of the CCC bonds of the benzene ring. The peaks at 1471 and 1482 cm⁻¹ result from CC bond stretching within the tube; those at 1548 and 1557 cm⁻¹ are due to asymmetric CCC bond stretching within the tube, whereas the peak at 1547 cm⁻¹ is due entirely to symmetric stretching of the CC bonds of the benzenesulfonic acid. Furthermore, the predicted Raman peak at 1650 cm⁻¹ is due to CC bond stretching of the benzene ring, including CH bond wagging on the ring. A very weak peak at 1806 cm⁻¹ results from the C=O stretching of the carboxylic acid only.
For the (7,0)-SWCNT functionalized with benzenesulfonic acid and with carboxylic acid (f-(7,0)-SWCNT), the key conclusions from the calculated Raman spectra are summarized as follows: 1) the RBM is red-shifted by as much as 25 cm⁻¹; 2) many new peaks appear in the disorder (D) mode range from 1300 to 1450 cm⁻¹, owing to the structural deformation of the tube and of the functional groups bound to the (7,0)-SWCNT; 3) the tangential (or G) mode is blue-shifted by as much as 20 cm⁻¹ as a result of the functional groups bound to the tube; 4) above the G-mode, new Raman features appear in the spectra of the f-(7,0)-SWCNT that belong to the functional groups (benzenesulfonic acid and carboxylic acid); 5) new Raman features appear throughout the spectrum, owing to combinations of the structural deformations of the tube and the functional groups; 6) for the benzenesulfonic acid, the CH bond stretching modes occur in the range from 3200 to 3240 cm⁻¹ and the OH bond stretching appears at 3703 cm⁻¹; for the carboxylic acid, the OH bond stretching is predicted at 3678 cm⁻¹; the CH bond stretching modes of the tube are predicted in the range from 3172 to 3200 cm⁻¹; 7) the RBM frequencies in the calculated Raman spectra of the functionalized (n,0)-SWCNTs (n = 6 to 11) are slightly red-shifted relative to those of the isolated SWCNTs, as seen in Figure 15, and the relative shift in the RBM frequency decreases with increasing tube diameter (a peak-by-peak comparison of this kind is sketched below). It is worth noting that the relative intensities of the peaks change significantly in resonance Raman spectra; however, because of the technical difficulty and calculation time involved, resonance Raman spectra are very difficult to calculate. Furthermore, in the low-frequency region below 600 cm⁻¹ there are many relatively very weak Raman peaks, which result from out-of-plane motions or twisting of the phenyl group. These Raman bands of the functionalized CNTs may be significantly enhanced in the resonance Raman spectrum (RRS), since there is a significant dipole-dipole interaction between the functional groups. This may play a crucial role and might be used as a signature of the alignment of CNTs in two-dimensional networks; on the other hand, the presence of additional bands may lead to the erroneous conclusion that more than one type of SWNT is present in the sample. For instance, Raman bands resulting from out-of-plane motions are dramatically enhanced when dye molecules aggregate into so-called J- or H-type aggregates [48(a-d)]. New Raman peaks appear around 1550 cm⁻¹ due to the symmetric stretching of the CCC bonds and rocking of the CH bonds in the phenyl group of the benzenesulfonic acid. Several new Raman peaks, resulting from the benzenesulfonic acid alone or from combinations of benzenesulfonic acid and nanotube motions, are dispersed throughout the spectrum. The Raman peak resulting from the stretching of the CC sigma bond between the benzenesulfonic acid and the SWCNT is very weak and appears at about 1208 cm⁻¹.
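To keep track of statements like "the RBM is red-shifted by ~25 cm⁻¹" and "the G mode is blue-shifted by ~20 cm⁻¹", the sketch below pairs each peak of the isolated tube with the nearest peak of the functionalized tube and reports the shift. The peak lists reuse a few frequencies quoted in the text, padded with assumed labels; they are not complete spectra.

```python
# Hedged sketch: pair each Raman peak of the isolated tube with the nearest peak of the
# functionalized tube and report the shift. Peak lists are partial and partly assumed.
isolated = {"RBM": 410, "diagonal expansion": 284, "G (tangential)": 1574}
functionalized = {"RBM": 385, "split low-freq (a)": 250, "split low-freq (b)": 306,
                  "G (tangential)": 1595}

for label, freq in isolated.items():
    near_label, near_freq = min(functionalized.items(), key=lambda kv: abs(kv[1] - freq))
    shift = near_freq - freq
    direction = "blue" if shift > 0 else "red"
    print(f"{label}: {freq} -> {near_freq} cm^-1 ({direction}-shift of {abs(shift)} cm^-1, "
          f"matched to '{near_label}')")
```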
It is also worth noting that the calculations produce nonresonance Raman spectra, which differ from resonance Raman spectra in terms of intensity. Furthermore, the CH stretching of the end groups of the CNT appears at around 3185 cm⁻¹, while the CH stretching of the benzenesulfonic acid and the OH stretching of the carboxyl group appear at about 3590 and 3680 cm⁻¹, respectively.
IR spectra of functionalized SWCNTs
As provided in Figure 16, the predicted IR spectrum of the (n,0)-SWCNT exhibits strong IR peaks centered at 890 and 845 cm⁻¹; however, the IR spectra of the functionalized (n,0)-SWCNTs display many new strong and relatively weak IR peaks dispersed throughout the spectra, such as at 1650, 1275, 1150, 791, 570, 380, and 143 cm⁻¹. Also, in the range of 3000-4000 cm⁻¹, the CH and OH stretching modes of the benzenesulfonic acid and the carboxylic acid are found to appear at around 3590 and 3670 cm⁻¹, respectively. The C=O stretch of the carboxyl groups, which was experimentally observed at 1782 cm⁻¹ in the FTIR spectra of MWNTs after electron-beam irradiation by Eun-Ju Lee et al. [50], is predicted at 1800 cm⁻¹ by the calculation. The peaks found around 1650 cm⁻¹ are mainly due to C-C stretching and CCC bending deformations; the asymmetric and symmetric stretching modes of the O=S=O group of the benzenesulfonic acid are found at 1275 and 1150 cm⁻¹, respectively; the S-OH stretching appears at 780 cm⁻¹; the bending deformation of the SO3H group, mimicking the opening and closing of an umbrella, appears at 570 cm⁻¹; the out-of-plane motion of the phenyl group of the benzenesulfonic acid appears at 380 cm⁻¹; and the twisting of the O=S=O group appears at about 143 cm⁻¹.
Vertical electronic transitions of functionalized SWCNTs
We calculated the vertical electronic transitions for (n,0)-SWCNTs functionalized with benzenesulfonic acid. The functionalized SWCNTs were constructed with two and four functional groups covalently attached to (7,0)/(9,0)- and (12,0)/(8,0)-SWCNTs, respectively, with lengths equivalent to two unit cells. Table 5 provides the calculated electronic transitions of the functionalized and isolated SWCNTs; selected calculated electron densities for the HOMO and LUMO states involved in the electronic transitions are provided in Figure 17. The results of the calculations clearly indicate that both the dipole-allowed and the forbidden electronic transitions are lowered by as much as 0.8 eV relative to the transition energies of the corresponding isolated SWCNT. Furthermore, the calculations also show that below 2.5 eV there is no electron transfer from the nanotube to the functional group, or vice versa; at higher energies, however, the calculated electron densities suggest that intrasystem charge transfer between the molecule and the nanotube would occur. Because the spacing between the electronic energy levels is very small for some of the dipole-allowed and forbidden electronic transitions, radiationless transitions are expected as a result of vibrational coupling or touching of the electronic potential energy surfaces.
This coupling may be very large and might lead to internal conversion (IC), again via vibroelectronic coupling, which might be observable with fluorescence spectroscopic techniques, as discussed and illustrated in Figure 1 in the introductory section. We also point out that while the isolated SWCNTs exhibit only one or a few dipole-allowed electronic transitions below 2.5 eV, the functionalized SWCNTs produce many dipole-allowed electronic transitions compared with the corresponding isolated SWCNTs, in addition to the lowered transition energies.
Table 5. Calculated vertical electronic transition energies (T_e, in eV), S_0 → S_k, of the mF-(n,0)-SWCNTs, together with those of the isolated (n,0)-SWCNTs for comparison and their oscillator strengths (f). Here m indicates the number of functional groups covalently bound to the (n,0)-SWCNT and F symbolizes the benzenesulfonic acid used as the functional group in this study.
Study of polyynes encapsulated into single-walled carbon nanotubes
One-dimensional carbon atomic wires displaying sp hybridization have an attractive electronic and vibrational structure that strongly affects their optical and transport properties. These kinds of structure have attracted researchers' interest because their purely sp-hybridized carbon framework is expected to display completely different behavior from the more common sp² and sp³ carbon structures. Polyyne molecules are linear carbon chains with alternating single and triple bonds, terminated by end atoms or groups. A. Milani et al. [51] have investigated the charge transfer in carbon atomic wires (polyynes) terminated by phenyl rings, and its effects on the structure of the system, using normal Raman and surface-enhanced Raman spectroscopy (SERS) techniques as well as density functional theory (DFT) calculations of the Raman modes. They reported that the occurrence of a charge transfer between polyynes and metal nanoparticles (both in liquids and supported on surfaces) is evidenced by Raman and SERS as a softening of the vibrational stretching modes. They suggested that the carbon wires alter their structure toward a more equalized geometry (i.e., all double bonds) as a consequence of the charge transfer, and they pointed out that these observations open potential perspectives for developing carbon-based atomic devices with tunable electronic properties. More experimental and theoretical investigations are therefore needed to gain insight into these systems. Even though molecules like polyynes are very unstable under normal temperature and atmospheric conditions [52,53], it has been reported that they are astoundingly stable inside single-walled carbon nanotubes (SWCNTs), even at high temperature (300 °C) [54,55]. The Raman spectrum of the polyyne molecules exhibits two intense Raman bands around 2000-2200 cm⁻¹, which are labeled the α- and β-bands. The positions of these two bands decrease in frequency as the polyyne size increases. With increasing chain length, the frequency of the α-band decreases almost linearly, while the position of the β-band oscillates, and the frequency difference between the β- and α-bands is dissimilar for polyyne molecules of different sizes. Furthermore, L. M. Malard et al. [56] carried out a resonance Raman study of two polyyne molecules (C10H2 and C12H2) encapsulated inside SWCNTs using various laser lines covering the whole visible range.
They indicated that the main Raman features associated with stretching modes of the linear chains in both samples (C10H2@SWCNT and C12H2@SWCNT) are strongly enhanced around 2.1 eV, while the optical absorption observed when these molecules are dispersed in an isotropic medium [57] or in the gas phase [58] occurs above 4.5 eV. They concluded that dipole-forbidden (dark) transitions of the polyynes become active as a result of symmetry breaking when the molecules are encapsulated inside the SWCNT. In this section, we discuss the calculated results for the polyyne molecule (C10H2) encapsulated within the (6,0)-SWCNT. Figure 18 and Table 6 provide, respectively, the calculated electron densities and the energy levels of the molecular orbitals (MOs), HOMOs and LUMOs, of C10H2@(6,0)-SWCNT. Geometry optimization with and without symmetry restriction yielded D_6h and D_2h point groups, respectively. The D_2h structure lies lower in energy, by as much as 0.19 eV, than the D_6h structure, and both structures have a 1A_1g electronic ground state for the C10H2@(6,0)-SWCNT system. For the isolated C10H2 (polyyne), the predicted electronic ground state is 1Σ_g and the point group is D_∞h. As seen in Figure 18, the plotted electron density shows that while three of the first five highest occupied molecular orbitals (HOMO, HOMO-3, and HOMO-4, with A_g, B_1u, and A_1u symmetries, respectively) belong only to the (6,0)-SWCNT, the HOMO-1 and HOMO-2, with B_2u and B_3u symmetries, involve both components of C10H2@(6,0)-SWCNT, and there is a significant bonding interaction between the polyyne molecule (C10H2) and the (6,0)-SWCNT in the ground state. As seen in Table 6, the LUMO (B_3u), LUMO+1 (B_2g), LUMO+4 (A_g), LUMO+5 (B_1g), and LUMO+6 (B_1u), lying about 0.43, 0.43, 0.89, 0.89, and 1.66 eV above the HOMO (A_g), respectively, belong to the SWCNT only, while the LUMO+1 (B_3u) and LUMO+2 (B_2u) belong to both the polyyne molecule and the SWCNT. However, the LUMO+7 (B_3g), LUMO+8 (B_2g), and LUMO+9 (B_3g), lying 1.99, 1.99, and 2.39 eV above the HOMO (A_g), respectively, not only involve both components of C10H2@(6,0)-SWCNT but also show a significant sigma-bonding interaction in the excited states, as seen in Figure 18. The bonding interactions between C10H2 and the (6,0)-SWCNT in the ground state lead to an increase in the triple-bond lengths and a decrease in the single C-C bond lengths within the polyyne molecule (C10H2) when it is encapsulated inside the (6,0)-SWCNT, relative to the corresponding bond distances of the isolated polyyne chain (C10H2). The C-C bond distances in the encapsulated C10H2 molecule thus indicate a charge transfer between the polyyne and the SWCNT, analogous to that observed between polyynes and nanoparticles by SERS, as mentioned above. The calculated vertical dipole-allowed electronic transitions (S_0 → S_n) of the C10H2@(6,0)-SWCNT up to 0.52 eV are given in Table 7. Because of technical difficulties, we were unable to calculate the higher electronic transitions that could provide more detailed information about internal conversion (IC) and intersystem crossing (ISC).
Table 6. Calculated energy levels ΔE (eV) of the molecular orbitals (MOs) for the C10H2@(6,0)-SWCNT and C10H2, relative to their highest occupied molecular orbital (HOMO).
The lowest dipole-allowed vertical electronic transitions, S_0(A_1g) → S_7(B_3u), resulting from the HOMO-3 → LUMO+1 and HOMO → LUMO+2 excitations, and S_0(A_1g) → S_7(B_2u), resulting from the HOMO-4 → LUMO+1 and HOMO → LUMO+3 excitations, together with the second-lowest dipole-allowed vertical electronic transitions, S_0(A_1g) → S_11(B_3u), due to the HOMO-3 → LUMO+1 and HOMO → LUMO+2 excitations, and S_0(A_1g) → S_12(B_2u), due to the HOMO-4 → LUMO+1 and HOMO → LUMO+3 excitations, clearly indicate the existence of a charge transfer from the SWCNT to the polyyne molecule when the electron densities of the HOMO and LUMOs involved in these transitions are examined. When we examine the calculated vertical electronic transitions together with the calculated energy levels of the molecular orbitals (MOs) of the polyyne molecule encapsulated inside the SWCNT, IC and ISC can be expected. Based on these calculations, molecules encapsulated inside nanotubes (NTs) could be used as energy conversion systems as a consequence of the charge transfer between them. This charge transfer should also be reflected in the intensities of the Raman bands at the resonance excitation energies where the charge transfer takes place between the molecules (or particles) and the nanotubes.
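The bond equalization invoked above is commonly quantified by the bond length alternation (BLA), the difference between the mean long-bond and mean short-bond lengths; a smaller BLA indicates a more cumulene-like chain, as expected when charge transfer occurs. The sketch below computes BLA for a free and an encapsulated chain using invented placeholder bond lengths, not the calculated C10H2 geometries of this chapter.

```python
# Hedged sketch: quantify chain "equalization" via the bond length alternation,
# BLA = mean(long bonds) - mean(short bonds). Bond lengths (angstrom) are placeholders.
def bond_length_alternation(bond_lengths):
    shorts = bond_lengths[0::2]  # assumed ordering: triple-like, single-like, triple-like, ...
    longs = bond_lengths[1::2]
    return sum(longs) / len(longs) - sum(shorts) / len(shorts)


free_chain = [1.222, 1.344, 1.224, 1.341, 1.226, 1.341, 1.224, 1.344, 1.222]
encapsulated = [1.236, 1.330, 1.238, 1.328, 1.240, 1.328, 1.238, 1.330, 1.236]

print(f"BLA, free chain placeholder:         {bond_length_alternation(free_chain):.3f} A")
print(f"BLA, encapsulated chain placeholder: {bond_length_alternation(encapsulated):.3f} A")
```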
Classical Signaling and Trans-Signaling Pathways Stimulated by Megalobrama amblycephala IL-6 and IL-6R
Interleukin-6 (IL-6) is a multipotent cytokine. IL-6 plays a dual role in inflammation through both classical signaling (IL-6 binds the membrane IL-6 receptor, IL-6R) and trans-signaling (IL-6 binds soluble IL-6R). However, the regulation of IL-6 activity, especially the regulation of the signaling pathways and downstream genes mediated by IL-6 trans-signaling, remains largely unclear in teleosts. Grass carp (Ctenopharyngodon idellus) hepatic (L8824) cells, kidney (CIK) cells, and primary hepatocytes were used as test models in this study. First, the biological activity of recombinant blunt snout bream (Megalobrama amblycephala) IL-6 (rmaIL-6) and sIL-6R (rmasIL-6R) was verified by quantitative PCR (qPCR) and western blot. The western blot results showed that rmaIL-6 significantly upregulated signal transducer and activator of transcription 3 (STAT3) phosphorylation in L8824 cells and primary hepatocytes, while rmaIL-6 in combination with rmasIL-6R (rmaIL-6+rmasIL-6R) significantly upregulated STAT3 phosphorylation in all types of cells. Furthermore, rmaIL-6 and rmaIL-6+rmasIL-6R could induce extracellular-signal-regulated kinase 1/2 (ERK1/2) phosphorylation only in L8824 cells and CIK cells, respectively. Therefore, IL-6 mainly acts by activating the janus kinase (JAK)/STAT3 pathway rather than the MEK (mitogen-activated protein kinase kinase)/ERK pathway. Finally, activation of the JAK2/STAT3 pathway was shown to be essential for the induction of socs3a and socs3b by IL-6 trans-signaling, based on treatment with JAK2/STAT3 pathway inhibitors (c188-9 and TG101348). These findings provide functional insights into IL-6 classical signaling and trans-signaling regulatory mechanisms in teleosts, enriching our knowledge of fish immunology. The proinflammatory and anti-inflammatory effects of IL-6 appear to originate from its capacity to activate multiple signaling pathways in a cell-type-specific manner [16,17]. In classical signaling, IL-6 binds to membrane-bound IL-6R (mIL-6R) and activates intracellular signaling cascades via gp130. IL-6 classical signaling is primarily limited to hepatocytes and immune cells (macrophages and certain other leukocyte populations), which express IL-6R on their surface [18,19]; thus, the number of cell types targeted by IL-6 classical signaling is restricted. The body also produces a soluble IL-6 receptor (sIL-6R) that is released into the circulation after proteolytic cleavage of the mIL-6R protein or following translation from alternatively spliced mRNA [20,21]. Signaling by IL-6 in combination with sIL-6R is called trans-signaling. Because gp130 is uniformly distributed, this stimulatory IL-6/sIL-6R complex can, in principle, activate all cells [20]. The two signaling modes have divergent functions: classical signaling is associated with regenerative and anti-inflammatory functions, while trans-signaling is linked to pro-inflammatory functions [22][23][24][25]. Inhibitors have been widely used in the study of signaling pathways and in clinical medicine. c188-9, a small-molecule inhibitor of STAT3, targets the phosphotyrosyl peptide binding site within the STAT3 SH2 domain and does not inhibit the upstream JAK or Src kinases [26]. c188-9 has been used to inhibit the phosphorylation of STAT3 and the proliferation of cancer cells [27,28].
On the other hand, Fedratinib (TG101348) is a selective JAK2 inhibitor that is indicated for the treatment of adults with intermediate-2 or high-risk primary or secondary myelofibrosis [29]. Sequence Analysis of maIL-6 and masIL-6R As shown in Supplementary Figure S1A, the cDNA sequence of mail-6 is 1045 bp and contains an open reading frame (ORF) of 699 bp, encoding a 232 aa protein with a signal peptide of 24 aa. The full-length cDNA of mail-6r is 3739 bp, with an ORF of 1797 bp, encoding 598 aa. The signal peptide of maIL-6R spans aa 1-21, the extracellular region aa 1-491, the transmembrane region aa 492-514, and the intracellular region aa 515-598 (Supplementary Figure S1B). To analyze the homology of IL-6 between grass carp (C. idellus) and blunt snout bream (M. amblycephala), multiple sequence alignment was performed. The analysis showed that the amino acid sequence similarity of IL-6 and sIL-6R between blunt snout bream and grass carp was as high as 90.91% and 86.89%, respectively (Figure 1A,B). However, the IL-6 proteins from grass carp and blunt snout bream showed some differences in secondary structure and solvent accessibility (Figure 1C). Effects of Recombinant IL-6 on the Expression of Downstream Genes in L8824 Cells As shown in Figure 2, recombinant grass carp IL-6 (rciIL-6) and recombinant blunt snout bream IL-6 (rmaIL-6) proteins were successfully produced in the inclusion bodies of Escherichia coli. The purified rciIL-6 (Figure 2A) and rmaIL-6 (Figure 2B) were visualized as similar single bands around 40 kDa on an SDS-PAGE gel (theoretical MW: 24.27 kDa for rciIL-6 or 24.15 kDa for rmaIL-6, plus the 18.3 kDa pET-32a plasmid tag protein). To evaluate the biological activity of recombinant IL-6, the expression of downstream genes including hamp, il-1β, il-6, socs3a, and socs3b in L8824 cells was analyzed. As shown in Figure 2C, after 4 h of stimulation, the expression of hamp, il-1β, and il-6 was induced by 0.5 and 1.0 µg/mL of rciIL-6 protein, while socs3b expression was inhibited by 0.5 µg/mL of rciIL-6. Similarly, as shown in Figure 2D, after 2 h of stimulation, the expression of hamp and socs3b could not be significantly modulated by rmaIL-6, but il-6 (at all three doses), il-1β (at 1.5 µg/mL), and socs3a (at all three doses) could be induced by rmaIL-6. As shown in Figure 2E, when L8824 cells were treated with rciIL-6 for different times, the mRNA level of hamp was significantly upregulated at 4 h and then gradually decreased to the control level, whereas, after rmaIL-6 treatment, the expression of hamp peaked at 24 h. In addition, rciIL-6 increased the il-6 mRNA level only at 4 h, while rmaIL-6 could significantly upregulate the expression of il-6 mRNA at 2, 4, and 24 h of stimulation. rciIL-6 significantly downregulated socs3a at 24 h, whereas rmaIL-6 significantly upregulated socs3a expression at 2 h and 36 h. In addition, rciIL-6 significantly increased socs3b expression only at 8 h, while rmaIL-6 significantly induced socs3b expression at both 4 h and 12 h. Activation of Signaling Pathways by rmaIL-6 with or without rmasIL-6R As shown in Figure 3A, rmasIL-6R was successfully obtained. The SDS-PAGE results revealed that the molecular mass of rmasIL-6R is about 60 kDa (the predicted MW of rmasIL-6R is 52.78 kDa, and that of the His-tag is ~4.8 kDa), which is consistent with the theoretical molecular mass. 
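The reported band sizes follow from simple additive mass bookkeeping (mature protein plus vector tag). A minimal sketch of that arithmetic, using only the kDa values quoted in the text (the function name is ours, for illustration):

```python
# Sanity-check the apparent SDS-PAGE band sizes by adding the theoretical
# mass of each mature protein to the mass of its vector tag.
# All kDa values are taken from the text above.

def fusion_mw(protein_kda: float, tag_kda: float) -> float:
    """Theoretical molecular weight of a tagged fusion protein, in kDa."""
    return protein_kda + tag_kda

# rciIL-6 and rmaIL-6 carry the pET-32a tag (~18.3 kDa); bands run near 40 kDa.
print(fusion_mw(24.27, 18.3))  # ~42.6 kDa expected for rciIL-6
print(fusion_mw(24.15, 18.3))  # ~42.5 kDa expected for rmaIL-6
# rmasIL-6R carries a His tag (~4.8 kDa); the band runs near 60 kDa.
print(fusion_mw(52.78, 4.8))   # ~57.6 kDa expected for rmasIL-6R
```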
The gp130 CDS fragment could be detected in the cDNA templates of L8824 cells, CIK cells, and primary hepatocytes, while the il-6r CDS fragment could be detected in the cDNA templates of L8824 cells and primary hepatocytes, but not of CIK cells (Supplementary Figure S2). As shown by western blot, in L8824 cells, STAT3 phosphorylation was induced by rmaIL-6 at both tested concentrations, with 1.5 µg/mL being the more effective (Supplementary Figure S3). In CIK cells, STAT3 phosphorylation was induced by rmaIL-6 combined with lower concentrations of rmasIL-6R (i.e., 0.5 µg/mL) (Figure 3B). Figure 1. Sequence alignment of ciIL-6 and maIL-6 (A), cisIL-6R and masIL-6R (B), and secondary structure and solvent accessibility prediction for ciIL-6 and maIL-6 (C). The consensus sequence amino acids are shown in solid black. The differences in solvent accessibility are boxed, and the difference in secondary structure is indicated by arrows. Figure 2. Production and biological activity of recombinant IL-6 of grass carp (rciIL-6) and blunt snout bream (rmaIL-6). SDS-PAGE of rciIL-6 (A) and rmaIL-6 (B) proteins. Lane 1: molecular mass marker; Lane 2: whole-cell lysate of non-induced E. coli; Lane 3: whole-cell lysate of induced E. coli containing the recombinant proteins; Lane 4: purified and refolded recombinant proteins. L8824 cells were treated with different concentrations of rciIL-6 (C) or rmaIL-6 (D). L8824 cells were treated with different concentrations of rciIL-6 or rmaIL-6 for different times (E). The hamp, il-6, il-1β, socs3a, and socs3b mRNAs were quantified by qPCR. Gene expression was normalized relative to the reference gene 18S rRNA. Fold changes were calculated by comparing the average gene expression of the treatment groups with that of the corresponding control groups (HI, heat-inactivated protein). (C,D): Student's t-test was used to determine the significance of differences between the experimental and the control groups. (E): one-way analysis of variance (ANOVA) was used to analyze the differences among different time points. Data are presented as mean ± SEM of at least three replicates for each experiment. * p < 0.05, ** p < 0.01. Figure 3. Phosphorylation of STAT3 and ERK1/2 in CIK cells treated with rmaIL-6+rmasIL-6R. The signals of phosphorylated proteins and total proteins were first normalized to β-actin, and the ratios between phosphorylated protein and total protein were calculated. Data are presented as mean ± SEM of at least three replicates for each experiment (D,F). * p < 0.05, ** p < 0.01. On the other hand, when CIK cells were stimulated with rmaIL-6 and rmasIL-6R alone or jointly, only rmaIL-6+rmasIL-6R induced STAT3 phosphorylation (Supplementary Figure S3). Therefore, the phosphorylation of ERK1/2 and STAT3 was detected at different time points after CIK cells were stimulated only with rmaIL-6+rmasIL-6R (Figure 3E). As shown in Figure 3F, CIK cells responded to rmaIL-6+rmasIL-6R stimulation similarly to L8824 cells: STAT3 phosphorylation increased significantly at 10 min, peaked at 30 min, and then declined slowly. We then investigated whether IL-6 induced activation of the MEK/ERK signaling pathway by measuring the level of ERK1/2 phosphorylation. Treatment with rmaIL-6+rmasIL-6R caused strong phosphorylation of ERK1/2 in CIK cells at 60 and 120 min (Figure 3F). In primary hepatocytes of grass carp, STAT3 phosphorylation was induced by rmaIL-6 or rmaIL-6+rmasIL-6R (Figure 4A,B). 
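The blot quantification described in the captions (signals normalized to the β-actin loading control, then phospho/total ratios computed) reduces to a short calculation. A minimal sketch with hypothetical band intensities (the function name and values are ours, not study data):

```python
def phospho_ratio(p_signal: float, total_signal: float, actin: float) -> float:
    """Normalize phospho and total band intensities to the β-actin loading
    control of the same lane, then return the phospho/total ratio, as
    described in the figure captions above."""
    return (p_signal / actin) / (total_signal / actin)

# Hypothetical ImageJ gray values for one lane (not data from the study):
print(phospho_ratio(p_signal=1200.0, total_signal=3000.0, actin=1500.0))  # 0.4
```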
The stimulation of primary hepatocytes with rmaIL-6 or rmaIL-6+rmasIL-6R presented similar kinetics, with a peak of STAT3 phosphorylation at 10 min (Figure 4C). However, neither rmaIL-6 nor rmaIL-6+rmasIL-6R significantly affected ERK1/2 phosphorylation (Figure 4D). Figure 4. The signals of phosphorylated proteins and total proteins were first normalized to β-actin, and the ratios between phosphorylated protein and total protein were calculated (C,D). Data are presented as mean ± SEM of at least three replicates for each experiment. * p < 0.05. Moreover, in L8824 cells, STAT3 phosphorylation could be induced by rmaIL-6 alone or in combination with rmasIL-6R, but the effect of the combined stimulation was stronger (Figure 5A). In contrast, CIK cells responded differently to rmaIL-6 stimulation than L8824 cells. In CIK cells, STAT3 phosphorylation could not be induced by rmaIL-6 or rmasIL-6R alone, but only by their combination (Figure 5B). Similar to L8824 cells, STAT3 phosphorylation was induced in primary hepatocytes by rmaIL-6 alone or in combination with rmasIL-6R (Figure 5C). Figure 5. Phosphorylation of STAT3 and ERK1/2 in L8824 cells (A), CIK cells (B), and primary hepatocytes (C). A representative blot containing phosphorylated proteins, total proteins, and β-actin is shown for each pathway (left column). Ratios of phosphorylated proteins to total proteins were calculated. Data are presented as mean ± SEM of at least three replicates for each experiment (right column). * p < 0.05, ** p < 0.01. Discussion Cytokines play an important role in the immune system. During IL-6 stimulation, STAT3 phosphorylation increased, while persistent activation of STAT3 contributed to IL-6 production in human basal cells [43]. In this study, rciIL-6 and rmaIL-6 could induce the expression of il-6 in L8824 cells, similar to what was observed in rainbow trout [37]. Therefore, IL-6 can increase il-6 expression in an autocrine or paracrine fashion and may amplify and exacerbate the inflammatory response. In our work, both rciIL-6 and rmaIL-6 significantly upregulated the expression of il-1β. Previous studies in teleosts showed that recombinant IL-6 protein could not affect the expression of il-1β in L. crocea after 24 h of stimulation [34] and even significantly reduced the expression of il-1β and socs3 in rainbow trout at 24 h [37]. In stark contrast, IL-6 rapidly and dramatically induced il-1β expression in Acipenser baeri (Brandt, 1869) spleen 6 h after treatment [44]. These differences may be due to the different durations of IL-6 stimulation. IL-6 has been shown to be a necessary and sufficient cytokine for inducing hamp expression in mice, human hepatocytes, and cortical neurons [45,46]. Recombinant IL-6 induced the expression of hamp in rainbow trout macrophages [37]. Our results showed that rciIL-6 could rapidly induce the upregulation of hamp in L8824 cells. In addition, rmaIL-6 had no significant effect on hamp expression at early time points but could significantly upregulate hamp at 24 h in L8824 cells. In fish, socs3 is associated with immune regulation, as its expression is modulated by inflammatory stimulants, cytokines, and infection [47]. In our work, both rciIL-6 and rmaIL-6 significantly upregulated the expression of socs3b, but rciIL-6 inhibited the expression of socs3a at a certain time point. Therefore, IL-6 in teleosts might play both pro-inflammatory and anti-inflammatory roles, but the mechanism differs slightly among species. This difference may be due to structural differences or to a different refolding efficiency of the recombinant proteins. 
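The fold changes discussed above come from the 2^−ΔΔCt method described in the Methods section below. A minimal sketch of that calculation with hypothetical Ct values (all numbers invented for illustration; 18S rRNA stands in as the reference gene, as in the study):

```python
import statistics

def fold_change(ct_target_trt, ct_ref_trt, ct_target_ctl, ct_ref_ctl):
    """Relative expression by the 2^-ddCt method:
    dCt = Ct(target) - Ct(reference); ddCt = dCt(treatment) - dCt(control);
    fold change = 2^-ddCt."""
    d_trt = statistics.mean(ct_target_trt) - statistics.mean(ct_ref_trt)
    d_ctl = statistics.mean(ct_target_ctl) - statistics.mean(ct_ref_ctl)
    return 2 ** -(d_trt - d_ctl)

# Hypothetical triplicate Ct values (target gene vs. 18S rRNA reference):
print(fold_change([24.1, 24.3, 24.0], [12.0, 12.1, 11.9],
                  [26.2, 26.0, 26.4], [12.1, 12.0, 12.2]))  # ~3.9-fold induction
```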
The general opinion is that IL-6R is present in only a few cell types, such as immune cells and hepatocytes, which are directly activated by IL-6 classical signaling [19,48]. In this study, we provide evidence of the existence of membrane-bound IL-6R in L8824 cells but not in CIK cells. IL-6R is important for ligand binding, but it has only a short cytoplasmic domain, and its signal transduction depends on the recruitment of gp130 [10,49]. IL-6 is generally believed to activate the JAK/STAT3 pathway through either soluble or membrane-bound IL-6R. Consistent with this, we found that both rmaIL-6 classical signaling and trans-signaling could trigger STAT3 phosphorylation in a time-dependent manner. However, trans-signaling led to more intense STAT3 phosphorylation than classical signaling. This is also consistent with related research in mammals [50,51]. In addition, studies have shown that IL-6-mediated downstream signaling cascades mainly include the JAK/STAT3, MEK/ERK, and PI3K/AKT pathways [52][53][54]. Here, we report the differences between the two signaling modes mediated by rmaIL-6 in different cell types. In L8824 cells, classical signaling involves both the JAK/STAT3 and MEK/ERK pathways, whereas trans-signaling involves only the JAK/STAT3 pathway. In contrast, in CIK cells, IL-6 trans-signaling could activate both the JAK/STAT3 and MEK/ERK pathways. In mammals, several pieces of evidence indicate reciprocal crosstalk between the MEK/ERK pathway and the JAK/STAT3 pathway [55,56]. In addition, IL-6-type cytokines did not activate ERK1/2, but did activate STAT3, in some human cells [57,58]. In primary hepatocytes, both IL-6 classical signaling and trans-signaling could activate the JAK/STAT3 pathway but not the MEK/ERK pathway. These results suggest that IL-6 is critical to the activation of the JAK/STAT3 pathway and may not be key to the activation of the MEK/ERK pathway in grass carp cells. Meanwhile, strong activation of STAT3 may affect ERK phosphorylation to prevent over-activation of immunity in teleosts, which is beneficial for maintaining the normal operation of the immune system. It is well known that activation of the JAK/STAT3 pathway leads to STAT3 dimerization and translocation into the nucleus, where it initiates gene transcription [59]. It was shown that socs3 transcription induced by IL-6 lasted at least 48 h in HUVEC cells [60]. In L8824 cells, STAT3 was found to be essential for trans-signaling-mediated expression of socs3a and socs3b. Moreover, in L8824 cells and CIK cells, blockade of JAK2 also resulted in complete inhibition of STAT3 phosphorylation as well as of socs3a and socs3b expression induced by trans-signaling. These findings indicate that JAK2 is located upstream of STAT3 in the signaling pathway mediated by IL-6 trans-signaling and that JAK2 is crucial for the induction of socs3a and socs3b. The JAK2/STAT3 inhibitor AG490 reduced hamp mRNA expression even when the cells were exposed to IL-6 [61]. In our study, the expression of hamp was not affected by rmaIL-6 trans-signaling in the short term, but TG101348 could significantly change its expression in L8824 cells and CIK cells. In previous studies, JAK inhibitors also acted on other signaling pathways such as MEK/ERK and PI3K/AKT [62,63]. However, whether TG101348 affects hamp expression by inhibiting other signaling pathways needs further study. 
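The significance testing used throughout the results (Student's t-test for two-group comparisons, one-way ANOVA across time points, as stated in the Statistical Analysis subsection below) can be reproduced with standard tools. A minimal sketch with made-up replicate values, not study data:

```python
from scipy import stats

# Two-group comparison (treatment vs. control), as in the panels analyzed
# by Student's t-test; the fold-change values here are hypothetical.
t, p = stats.ttest_ind([3.8, 4.1, 3.9], [1.0, 1.1, 0.9])
print(f"t-test: p = {p:.4f}")

# Comparison across several time points, as in the time-course panels
# analyzed by one-way ANOVA; again, hypothetical values.
f, p = stats.f_oneway([1.0, 1.1, 0.9], [2.0, 2.2, 1.9], [3.9, 4.0, 4.2])
print(f"ANOVA: p = {p:.4f}")
```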
Cell Lines and Fish Because IL-6 and sIL-6R proteins are conserved between grass carp and blunt snout bream, and blunt snout bream has no stable cell line, grass carp hepatic (L8824) cells and grass carp kidney (CIK) cells (Cell Collection Centre for Freshwater Organisms of Huazhong Agricultural University, Wuhan, China) were selected as model cells in this study. L8824 cells and CIK cells were cultured in M199 medium containing 10% fetal bovine serum with 100 U/mL penicillin and streptomycin (Gibco, NY, USA) and were kept at 28 • C in a 5% CO 2 environment. Healthy blunt snout bream (0.5-0.7 kg) and grass carp (1.0-1.5 kg) used in the study were obtained from Fisheries College Aquaculture Base, Huazhong Agricultural University, China. Isolation and Culture of Hepatocytes In this study, primary hepatocytes of grass carp were isolated and cultured according to a previous study [64]. Briefly, prior to the isolation of hepatocytes, the blood of the fish was drawn with a syringe. Then, the liver was rapidly isolated and washed several times in ice-cold phosphate-buffered saline (PBS) (Servicebio, Wuhan, China) containing 500 U/mL penicillin and streptomycin. After removal of PBS using sterile pipettes, the samples were cut into small pieces (about 1 mm 3 ). The small pieces of liver were digested with trypsin at 28 • C for 10 min, then the cells were collected, and the process was repeated 3 times. Thereafter, the cell suspension was centrifuged at 400 g for 10 min and washed twice. The harvested cell pellets were resuspended in M199 medium (Gibco, NY, USA) with 10% fetal bovine serum (Gibco, NY, USA) and 100 U/mL penicillin and streptomycin (Gibco, NY, USA) at a density of 1 × 10 6 cells/mL. Finally, primary hepatocytes were kept at 28 • C in a 5% CO 2 environment. RNA Extraction and cDNA Synthesis Total RNA was extracted with RNAiso Plus (Takara, Shiga, Japan) according to the manufacturer's instructions. The concentration and quality of total RNA were estimated by means of spectrophotometry with NanoDrop 2000 (Thermo Scientific, Delaware, Waltham, MA, USA) and agarose gel electrophoresis. For quantitative PCR (qPCR) analysis, 1 µg of total RNA was reverse-transcribed using the PrimeScript ® RT reagent Kit (Takara, Shiga, Japan) and then stored at −20 • C for further use. Expression and Purification of the Recombinant Proteins ciIL-6, maIL-6, and masIL-6R The mature peptide-coding sequences of ciil-6, mail-6, and masil-6r were amplified by reverse-transcriptase polymerase chain reaction (RT-PCR) using the liver cDNA of grass carp or blunt snout bream as a template. The specific gene primers are listed in Supplementary Table S1. The amplified products were digested by EcoR I/Xho I, BamH I/Hind III, and EcoR I/Hind III, respectively, then ligated into pET-28a/pET-32a, and transfected into BL21 cells (DE3; Tsingke, Jiangsu, China). The colonies were inoculated into 500 mL of Luria-Bertani (LB) medium containing ampicillin (Amp) or kanamycin (Kan) (50 µg/mL), and the culture solution was incubated at 200 r/min and 37 • C until the OD600 value was 0.5-0.6. Then, the recombinant proteins were induced with isopropyl-β-D-thiogalactoside (IPTG) for 10-12 h prior to harvest. After ultrasonication, the recombinant proteins were affinity-purified using the His-Tagged Inclusion Body Protein Purification Kit (CoWin Biosciences, China) according to the manufacturer's instructions. The proteins were analyzed by SDS-PAGE and visualized after staining with Coomassie brilliant blue R-250. 
Then, the purified recombinant proteins were dialyzed and refolded. The concentrations of the recombinant proteins were determined using NanoDrop 2000 (Thermo Scientific, Delaware, Waltham, MA, USA). The recombinant proteins were aliquoted and stored at −80 • C for further use. All the above experiments were set up with a blank control and three repetitions. After treatments, the cells were collected to extract total RNA or protein. qPCR Analysis qPCR was performed in a Bio-Rad CFX Connect™ real-time PCR system (Bio-Rad, US). The qPCR mixture consisted of 1.0 µL cDNA template, 7.4 µL nuclease-free water, 10.0 µL LightCycler ® 480 SYBR Green I Master (Roche, Switzerland), and 0.8 µL of each forward and reverse primers (10 µM). qPCR was conducted using the following program: 95 • C for 5 min, 40 cycles of 95 • C for 5 s, 60 • C for 20 s, and 72 • C for 20 s, followed by melting curve determination from 65 • C to 95 • C to verify the amplification of a single product. The relative expression levels of the target genes were measured by the 2 −∆∆Ct method [65], and 18S rRNA was used as the internal control [66][67][68][69]. The relative expression levels were indicated as fold change. Plasmid construction (ciIL-6, maIL-6, and masIL-6R) and qPCR primers (il-1β, il-6, hamp, socs3a, socs3b, and 18S rRNA) are shown in Supplementary Table S1. Protein Extraction and Quantification The cells were rinsed with PBS and lysed using RIPA lysis buffer (Beyotime, Shanghai, China). To quantify the proteins, the BCA Protein Assay kit was used (Beyotime, Shanghai, China) according to the manufacturer's instructions, and absorbance at 540 nm was measured using Multiskan-Ascent (Tecan NanoQuant 200, Tecan, Switzerland). Western Blot Cell lysates were mixed with 5 × SDS sample buffer and denatured for 10 min at 95 • C. Next, the protein mixture was loaded into an 8% SDS-PAGE gel, then transferred to the NC membranes (Pall, St. Show Low, AZ, USA) at 200 mA for 1 h. Subsequently, the membranes were blocked with TBST buffer containing 5% BSA or skimmed milk powder for 1.5 h at room temperature, then incubated with anti-STAT3, anti-ERK1/2 (Proteintech, Rosemont, IL, USA), anti-pSTAT3 (Huabio, Hangzhou, China), anti-pERK1/2, anti-β-actin (ABclonal, Wuhan, China) antibodies overnight at 4 • C. On the second day, the membranes were washed with TBST, incubated with goat anti-rabbit secondary antibodies (Yeasen, Shanghai, China) for 1 h at room temperature, and photographed using the Odyssey CLx image system (Li-cor, Lincoln, NE, USA). Finally, the gray value intensities of western blot results were measured by ImageJ software. Statistical Analysis Data are presented as mean ± standard error of the mean (SEM) of three repeated experiments. Statistical significance was analyzed using Student's t-test or one-way analysis of variance (ANOVA); p < 0.05 indicated significant difference, and p < 0.01 was considered as indicating extremely significant difference. Conclusions To sum up, rmaIL-6 and rmasIL-6R have biological activity and activate the JAK/STAT3 pathway and the expression of downstream genes. In L8824 cells, IL-6 classical signaling activated both JAK/STAT3 and MEK/ERK pathways, whereas trans-signaling activated only the JAK/STAT3 pathway. In CIK cells, IL-6 trans-signaling activated both JAK/STAT3 and MEK/ERK pathways. In primary hepatocytes, IL-6 classical signaling and trans-signaling only activated the JAK/STAT3 pathway. Therefore, IL-6 mainly acts by activating the JAK/STAT3 pathway. 
In addition, we demonstrated that activation of the JAK2/STAT3 pathways is essential for IL-6 trans-signaling-induced socs3a and socs3b production in L8824 cells and CIK cells. This study adds to the understanding of the regulation mechanisms of IL-6 classical and trans-signaling in fish, enriches our knowledge of fish immunology, and provides a theoretical basis for the prevention and treatment of fish diseases in the future. Institutional Review Board Statement: We have adhered to all local, national and international regulations and conventions, and we respected normal scientific ethical practices. The specimen used in this study comes from a population that was part of commercially fished individuals intended for human consumption. The animal protocol was approved by the Institutional Animal Care and Use Ethics Committee of Huazhong Agricultural University (Wuhan, China) (HZAUFI-2020-0015). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: All datasets generated for this study are included in the article/ Supplementary Materials. Conflicts of Interest: The authors declare no conflict of interest.
5,415.2
2022-02-01T00:00:00.000
[ "Biology" ]
Correctness of Sequential Monte Carlo Inference for Probabilistic Programming Languages Probabilistic programming is an approach to reasoning under uncertainty by encoding inference problems as programs. In order to solve these inference problems, probabilistic programming languages (PPLs) employ different inference algorithms, such as sequential Monte Carlo (SMC), Markov chain Monte Carlo (MCMC), or variational methods. Existing research on such algorithms mainly concerns their implementation and efficiency, rather than the correctness of the algorithms themselves when applied in the context of expressive PPLs. To remedy this, we give a correctness proof for SMC methods in the context of an expressive PPL calculus, representative of popular PPLs such as WebPPL, Anglican, and Birch. Previous work has studied correctness of MCMC using an operational semantics, and correctness of SMC and MCMC in a denotational setting without term recursion. However, for SMC inference—one of the most commonly used algorithms in PPLs as of today—no formal correctness proof exists in an operational setting. In particular, an open question is whether the resample locations in a probabilistic program affect the correctness of SMC. We solve this fundamental problem, and make four novel contributions: (i) we extend an untyped PPL lambda calculus and operational semantics to include explicit resample terms, expressing synchronization points in SMC inference; (ii) we prove, for the first time, that subject to mild restrictions, any placement of the explicit resample terms is valid for a generic form of SMC inference; (iii) as a result of (ii), our calculus benefits from classic results from the SMC literature: a law of large numbers and an unbiased estimate of the model evidence; and (iv) we formalize the bootstrap particle filter for the calculus and discuss how our results can be further extended to other SMC algorithms. Introduction Our starting point is the operational semantics of an expressive functional PPL calculus based on the operational formalization in Borgström et al. [6], representative of common PPLs. The operational semantics assigns to each pair of a term t and an initial random trace (a sequence of random samples) a non-negative weight. This weight is accumulated during evaluation through a weight construct, which, in current calculi and implementations of SMC, is (implicitly) always followed by a resampling. To decouple resampling from weighting, we present our first contribution. (i) We extend the calculus from Borgström et al. [6] to include explicit resample terms, expressing explicit synchronization points for performing resampling in SMC. With this extension, we also define a semantics which limits the number of evaluated resample terms, laying the foundation for the remaining contributions. In Section 4, we define the probabilistic semantics of the calculus. The weight from the operational semantics is used to define unnormalized measures ⟨t⟩ over traces and ⟦t⟧ over result terms. The measure ⟦t⟧ is called the target measure, and finding a representation of it is the main objective of inference algorithms. We give a formal definition of SMC inference based on Chopin [9] in Section 5. This includes both a generic SMC algorithm and two standard correctness results from the SMC literature: a law of large numbers [9] and the unbiasedness of the likelihood estimate [32]. In Section 6, we proceed to present the main contributions. 
(ii) From the SMC formulation by Chopin [9], we formalize a sequence of measures ⟨t⟩_n, indexed by n, such that ⟨t⟩_n allows for evaluating at most n resamples. This sequence is determined by the placement of resamples in t. Our first result is Theorem 1, showing that ⟨t⟩_n eventually equals ⟨t⟩ if the number of calls to resample is upper bounded. Because of the explicit resample construct, this also implies that, for all resample placements such that the number of calls to resample is upper bounded, ⟦t⟧_n eventually equals ⟦t⟧. We further relax the finite upper bound restriction and investigate under which conditions lim_{n→∞} ⟨t⟩_n = ⟨t⟩ pointwise. In particular, we relate this equality to the dominated convergence theorem in Theorem 2, which states that the limit converges as long as there exists a function dominating the weights encountered during evaluation. This gives an alternative set of conditions under which ⟨t⟩_n converges to ⟨t⟩ (now asymptotically, in the number of resamplings n). The contribution is fundamental, in that it provides us with a sequence of measures ⟨t⟩_n approximating ⟨t⟩ that can be targeted by the SMC algorithm of Section 5. As a consequence, we can extend the standard correctness results of that section to our calculus. This is our next contribution. (iii) Given a suitable sequence of transition kernels (ways of moving between the ⟨t⟩_n), we can correctly approximate ⟨t⟩_n with the SMC algorithm from Section 5. The approximation is correct in the sense of Section 5: the law of large numbers and the unbiasedness of the likelihood estimate hold. As a consequence of (ii), SMC also correctly approximates ⟨t⟩, and in turn the target measure ⟦t⟧. Crucially, this also means estimating the model evidence (likelihood), which allows for compositionality [18] and comparisons between different models [37]. This contribution is summarized in Theorem 3. Related to the above contributions, Ścibior et al. [40] formalize SMC and MCMC inference as transformations over monadic inference representations using a denotational approach (in contrast to our operational approach). They prove that their SMC transformations preserve the measure of the initial representation of the program (i.e., the target measure). Furthermore, their formalization is based on a simply-typed lambda calculus with primitive recursion, while our formalization is based on an untyped lambda calculus which naturally supports full term recursion. Our approach is also rather more elementary, requiring only basic measure theory, compared to the relatively heavy mathematics (category theory and synthetic measure theory) used by them. Regarding generalizability, their approach is both general and compositional in the different inference transformations, while we abstract over parts of the SMC algorithm. This allows us, in particular, to relate directly to standard SMC correctness results. Section 7 concerns the instantiation of the transition kernels from (iii), and also discusses other SMC algorithms. Our last contribution is the following. (iv) We define a sequence of sub-probability kernels k_{t,n} induced by a given program t, corresponding to the fundamental SMC algorithm known as the bootstrap particle filter (BPF) for our calculus. This is the most common version of SMC, and we present a concrete SMC algorithm corresponding to these kernels. We also discuss other SMC algorithms and their relation to our formalization: the resample-move [14], alive [24], and auxiliary [35] particle filters. 
Importantly, by combining the above contributions, we justify that the implementation strategies of the BPFs in WebPPL, Anglican, and Birch are indeed correct. In fact, our results show that the strategy in Anglican, in which every evaluation path must resample the same number of times, is too conservative. Detailed proofs for many lemmas found in the paper are available in the appendix. These lemmas are explicitly marked with †. A Motivating Example from Phylogenetics In this section, we give a motivating example from phylogenetics. The example is written in a functional PPL developed as part of this paper, in order to verify and experiment with the presented concepts and results. In particular, this PPL supports SMC inference (Algorithm 2) with decoupled resamples and weights, as well as sampling from random distributions with a sample construct. Consider the program in Fig. 1, encoding a simplified version of a phylogenetic birth-death model (see Ronquist et al. [37] for the full version). The problem is to find the model evidence for a particular birth rate (lambda = 0.2) and death rate (mu = 0.1), given an observed phylogenetic tree. The tree represents known lineages of evolution, where the leaves are extant (surviving to the present) species. Most importantly, for illustrating the usefulness of the results in this paper, the recursive function simBranch, with its two weight applications #1 and #2, is called a random number of times for each branch in the observed tree. Thus, different SMC executions encounter differing numbers of calls to weight. When resampling is performed after every call to weight (#1, #2, and #3), it is, because of the differing numbers of resamples, not obvious that inference is correct (e.g., the equivalent program in Anglican gives a runtime error). Our results show that such a resampling strategy is indeed correct. This strategy is far from optimal, however. For instance, resampling only at #3, which is encountered the same number of times in each execution, performs much better [26,37]. Our results show that this is correct as well, and that it gives the same asymptotic results as the naive strategy in the previous paragraph. Another strategy is to resample only at #1 and #3, again causing executions to encounter differing numbers of resamples. Because #1 weights with (log) 0, this approach gives the same accuracy as resampling only at #3, but avoids useless computation, since a zero-weight execution can never obtain non-zero weight. Equivalently to resampling at #1, zero-weight executions can also be identified and stopped automatically at runtime. This gives a direct performance gain, and both are correct by our results. We compared the three strategies above for SMC inference with 50 000 particles: resampling at #1, #2, and #3 resulted in a runtime of 15.0 seconds; resampling at #3, in a runtime of 12.6 seconds; and resampling at #1 and #3, in a runtime of 11.2 seconds. Furthermore, resampling at #1, #2, and #3 resulted in significantly worse accuracy compared to the other two strategies [26,37]. Summarizing the above, the results in this paper ensure correctness when exploring different resample placement strategies. As just demonstrated, this is useful, because resampling strategies can have a large impact on SMC accuracy and performance. A Calculus for Probabilistic Programming Languages In this section, we define the calculus used throughout the paper. 
In Section 3.1, we begin by defining the syntax, and demonstrate how simple probability distributions can be encoded using it. In Section 3.2, we define the semantics and demonstrate it on the previously encoded probability distributions. This semantics is used in Section 4 to define the target measure for any given program. In Section 3.3, we extend the semantics of Section 3.2 to limit the number of allowed resamples in an evaluation. This extended semantics forms the foundation for formalizing SMC in Sections 6 and 7. Syntax The main difference between the calculus presented in this section and the standard untyped lambda calculus is the addition of real numbers, functions operating on real numbers, a sampling construct for drawing random values from real-valued probability distributions, and a construct for weighting executions. The rationale for making these additions is that, in addition to discrete probability distributions, continuous distributions are ubiquitous in most real-world models, and the weighting construct is essential for encoding inference problems. In order to define the calculus, we let X be a countable set of variable names; D ∈ D range over a countable set D of identifiers for families of probability distributions over R, where the family for each identifier D has a fixed number of real parameters |D|; and g ∈ G range over a countable set G of identifiers for real-valued functions with respective arities |g|. More precisely, for each g, there is a measurable function σ_g : R^|g| → R. For simplicity, we often use g to denote both the identifier and its measurable function. We can now give an inductive definition of the abstract syntax, consisting of values v and terms t. Here, c ∈ R, x ∈ X, D ∈ D, g ∈ G. We denote the set of all terms by T and the set of all values by V. The formal semantics is given in Section 3.2. Here, we instead give an informal description of the various language constructs. Some examples of distribution identifiers are N ∈ D, the identifier for the family of normal distributions, and U ∈ D, the identifier for the family of continuous uniform distributions. The semantics of the term sample_N(0, 1) is, informally, "draw a random sample from the normal distribution with mean 0 and variance 1". The weight construct is illustrated later in this section, and we discuss the resample construct in detail in Sections 3.3 and 6. We use common syntactic sugar throughout the paper. Most importantly, we use false and true as aliases for 0 and 1, respectively, and () (unit) as another alias for 0. Furthermore, we often write g ∈ G as infix operators. For instance, 1 + 2 is a valid term, where + ∈ G. Now, let R_+ denote the non-negative reals. We define f_D : R^(|D|+1) → R_+ as the function f_D ∈ G such that f_D(c_1, . . . , c_|D|, ·) is the probability density (continuous distribution) or mass function (discrete distribution) for the probability distribution corresponding to D ∈ D and (c_1, . . . , c_|D|). For instance, f_N(0, 1, x) = (1/√(2π)) · e^(−(1/2)·x²) is the standard probability density of the normal distribution with mean 0 and variance 1. Lastly, we will also use let bindings, let rec bindings, sequencing using ;, and lists (all of which can be encoded in the calculus). Sequencing is required for the side-effects produced by weight (see Definition 5) and resample (see Sections 3.3 and 6). The explicit if expressions in the language deserve special mention; as is well known, they can also be encoded in the lambda calculus. 
The reason for explicitly including them in the calculus is to connect the lambda calculus to the continuous parts of the language. That is, we need a way of making control flow depend on the result of calculations on real numbers (e.g., if c_1 < c_2 then t_1 else t_2, where c_1 and c_2 are real numbers). An alternative to adding if-expressions is to let comparison functions in G return Church Booleans, but this requires extending the codomain of primitive functions. We now consider a set of examples. In Section 3.2 and Section 4.3, these examples will be further considered to illustrate the semantics and the target measure, respectively. Here, we first give the syntax, and informally discuss and visualize the probability distributions (i.e., the target measures, as we will see in Section 4.3) for the examples. First, consider the program in Fig. 2a. This program encodes a slight variation on the standard geometric distribution: flip a coin with bias 0.6 (i.e., the flip will result in heads, or true, 60% of the time) until a flip results in tails (false). The probability distribution is over the number of flips before encountering tails (including the final tails flip), and is illustrated in Fig. 2b. Figure 3. The Beta(2, 2) distribution as a program in (a), and visualized with a solid line in (c). Also, the program t_obs in (b), visualized with a dashed line in (c). The iter function in (b) simply maps the given function over the given list and returns (). That is, it calls observe true, observe false, and observe true purely for the side-effect of weighting. The geometric distribution is a discrete distribution, meaning that the set of possible outcomes is countable. We can also encode continuous distributions in the language. Consider first the program in Fig. 3a, directly encoding the Beta(2, 2) distribution, illustrated in Fig. 3c. This distribution naturally represents the uncertainty in the bias of a coin: in this case, the coin is most likely unbiased (bias 0.5), and biases closer to 0 and 1 are less likely. In Fig. 3b, we extend Fig. 3a by observing the sequence [true, false, true] when flipping the coin. These observations are encoded using the weight construct, which simply accumulates a product (as a side-effect) of all real-valued arguments given to it throughout the execution. First, recall the standard mass function (σ_fBern(p, true) = p; σ_fBern(p, false) = 1 − p; σ_fBern(p, x) = 0 otherwise) for the Bernoulli distribution corresponding to f_Bern ∈ G. The observations [true, false, true] are encoded using the observe function, which uses the weight construct internally to assign weights to the current value p according to the Bernoulli mass function. As an example, assume we have drawn p = 0.4. The weight for this execution is σ_fBern(0.4, true) · σ_fBern(0.4, false) · σ_fBern(0.4, true) = 0.4² · 0.6. Now consider p = 0.6 instead. For this value of p, the weight is instead 0.6² · 0.4. This explains the shift in Fig. 3c: a bias closer to 1 is more likely, since we have observed two true flips, but only one false. Semantics In this section, we define the semantics of our calculus. The definition is split into two parts: a deterministic semantics and a stochastic semantics. We use evaluation contexts to assist in defining our semantics. The evaluation contexts E induce a call-by-value semantics, and are defined as follows. We denote the set of all evaluation contexts by E. 
With the evaluation contexts in place, we proceed to define the deterministic semantics through a small-step relation →_Det. The rules are straightforward, and will not be discussed in further detail here. We use the standard notation for reflexive and transitive closures (e.g., →*_Det) and transitive closures (e.g., →⁺_Det) of relations throughout the paper. Following the tradition of Kozen [23] and Park et al. [34], sampling in our stochastic semantics works by consuming randomness from a tape of real numbers. We use inverse transform sampling, and therefore the tape consists of numbers from the interval [0, 1]. In order to use inverse transform sampling, we require that for each D ∈ D there exists a measurable function F⁻¹_D : R^|D| × [0, 1] → R such that F⁻¹_D(c_1, . . . , c_|D|, ·) is the inverse cumulative distribution function for the probability distribution corresponding to D and (c_1, . . . , c_|D|). We call the tape of real numbers a trace, and make the following definition. We use the notation (c_1, c_2, . . . , c_n)_S to indicate the trace consisting of the n numbers c_1, c_2, . . . , c_n. Given a trace s, we denote by |s| the length of the trace. We also denote the concatenation of two traces s and s′ by s * s′. Lastly, we let c :: s denote the extension of the trace s with the real number c as head. With the traces and F⁻¹_D defined, we can proceed to the stochastic semantics → over T × R_+ × S. The rule (Det) encapsulates the →_Det relation, and states that terms can move deterministically only to terms of the form t_stop. Note that terms of the form t_stop are found on the left-hand side of the other rules. The (Sample) rule describes how random values are drawn from the inverse cumulative distribution functions and the trace when terms of the form sample_D(c_1, . . . , c_|D|) are encountered. Similarly, the (Weight) rule determines how the weight is updated when weight(c) terms are encountered. Finally, the resample construct always evaluates to unit, and is therefore meaningless from the perspective of this semantics. We elaborate on the role of the resample construct in Section 3.3. With the semantics in place, we define two important functions over S for a given term. In the below definition, assume that a fixed term t is given. Intuitively, r_t is the function returning the result value after having repeatedly applied → on the initial trace s. Analogously, f_t gives the density or weight of a particular s. Note that, if (t, 1, s) gets stuck or diverges, the result value is () and the weight is 0. In other words, we disregard such traces entirely, since we are in practice only interested in probability distributions over values. Furthermore, note that if the final trace is not ()_S, the value and weight are again () and 0, respectively. The motivation for this is discussed in Section 4.3. To illustrate r_t and f_t, first consider the geometric program t_geo in Fig. 2a and a trace s = (0.5, 0.3, 0.7)_S. Let E = if [·] then 1 + geometric () else 1. It is easy to check that t_geo →⁺_Det E[sample_Bern(0.6)]. Now, since Bern(0.6) is the probability distribution for flipping a coin with bias 0.6, F⁻¹_Bern(0.6, c) is true precisely when c ≤ 0.6. The trace entries 0.5 and 0.3 therefore produce true, and 0.7 produces false; the evaluation consumes the entire trace and terminates, i.e., (t_geo, 1, s) →* (3, 1, ()_S). It follows that r_tgeo(s) = 3 and f_tgeo(s) = 1. Now, instead consider the trace s_2 = (0.5, 0.7, 0.3)_S. We have (t_geo, 1, s_2) →* (2, 1, (0.3)_S), since the second flip already yields false. The term is now stuck, and because we have not used up the entire trace, we have r_tgeo(s_2) = (), f_tgeo(s_2) = 0. (An executable sketch of this trace-based evaluation follows below.) 
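The trace semantics just illustrated is easy to mimic executably. The following is a minimal Python sketch (our own illustration, not the paper's calculus) of r_t and f_t for the geometric program: each coin flip consumes one number from the trace via the inverse CDF of Bern(0.6) (true iff c ≤ 0.6), and a run only counts if it terminates with the trace exactly used up:

```python
def run_geometric(trace, bias=0.6):
    """Mimic r_t and f_t for the geometric program: consume one trace entry
    per coin flip (inverse transform: c <= bias means true/heads), and
    return ((), 0.0) if the trace is too short or not fully consumed."""
    flips, i = 0, 0
    while True:
        if i >= len(trace):            # trace too short: evaluation is stuck
            return (), 0.0
        c = trace[i]
        i += 1
        flips += 1
        if c > bias:                   # tails: stop flipping
            break
    if i != len(trace):                # leftover randomness: weight 0
        return (), 0.0
    return flips, 1.0                  # density of each used sample is 1

print(run_geometric([0.5, 0.3, 0.7]))  # (3, 1.0): two heads, then tails
print(run_geometric([0.5, 0.7, 0.3]))  # ((), 0.0): 0.3 is left unconsumed
print(run_geometric([0.5, 0.3]))       # ((), 0.0): trace too short
```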
The opposite of the above can also occur: given the trace s_3 = (0.5, 0.3)_S, it holds that r_tgeo(s_3) = () and f_tgeo(s_3) = 0, since the provided trace is not long enough. In general, we have that r_tgeo(s) = n and f_tgeo(s) = 1 whenever s ∈ [0, 0.6]^(n−1) × (0.6, 1]. Otherwise, r_tgeo(s) = () and f_tgeo(s) = 0. We will apply this conclusion when reconsidering this example in Section 4.3. To illustrate the weight construct, consider the program t_obs in Fig. 3b and the singleton trace (0.8)_S. This program will, in total, evaluate one call to sample and three calls to weight. Now, let h(c) = F⁻¹_Beta(2, 2, c) and recall the function σ_fBern from Section 3.1. Using the notation φ(c, x) = σ_fBern(h(c), x), we have, for some evaluation contexts, an evaluation that samples p = h(0.8) and then weights with φ(0.8, true), φ(0.8, false), and φ(0.8, true). That is, r_tobs((0.8)_S) = h(0.8) and f_tobs((0.8)_S) = h(0.8)²(1 − h(0.8)). For arbitrary c, we see that r_tobs((c)_S) = h(c) and f_tobs((c)_S) = h(c)²(1 − h(c)). For any other trace s with |s| ≠ 1, r_tobs(s) = () and f_tobs(s) = 0. We will apply this result when reconsidering this example in Section 4.3. Resampling Semantics In order to connect SMC in PPLs to the classical formalization of SMC presented in Section 5—and thus enable the theoretical treatments in Sections 6 and 7—we need a relation in which terms "stop" after a certain number n of encountered resample terms. In this section, we define such a relation, denoted ↪. It is → extended with a natural number n, indicating how many further resample terms can be evaluated. We implement this limitation by replacing the rule (Resample) of → with the rule (Resample-Fin) of ↪, which decrements n each time it is applied, causing terms to get stuck at the (n + 1)th resample encountered. Now, assume that a fixed term t is given. We define r_{t,n} and f_{t,n} similarly to r_t and f_t. As for r_t and f_t, these functions return the result value and weight, respectively, after having repeatedly applied ↪ on the initial trace s. There is one difference compared to →: besides values, we now also allow stopping with non-zero weight at terms of the form E[resample]. To illustrate ↪, r_{t,n}(s), and f_{t,n}(s), consider the term t_seq defined in (12). This term encodes a model in which an object moves along a real-valued axis in discrete time steps, but where the actual positions (x_1, x_2, . . .) can only be observed through a noisy sensor (c_1, c_2, . . .). The inference problem consists of finding the probability distribution of the very last position x_t, given all collected observations (c_1, c_2, . . . , c_t). Most importantly, note the position of resample in (12): it is evaluated just after evaluating weight in every folding step. Because of this, for n < t and all traces s such that f_{t_seq,n}(s) > 0, r_{t_seq,n}(s) is a term of the form E[resample; . . .] in which the observations [c_{n+1}, c_{n+2}, . . . , c_{t−1}, c_t] remain to be processed, and where x_n is the value sampled in sim at the nth folding step. That is, we can now "stop" evaluation at resamples. We will revisit this example in Section 6. The Target Measure of a Program In this section, we define the target measure induced by any given program in our calculus. We assume basic familiarity with measure theory, Lebesgue integration, and Borel spaces. McDonald and Weiss [28] provide a pedagogical introduction to the subject. We also summarize the definitions and lemmas used in this article in Appendix B.1. 
In order to define the target measure of a program as a Lebesgue integral (Section 4.3), we require a measure space on traces (Section 4.1) and a measurable space on terms (Section 4.2). For illustration, we derive the target measures for two of the example programs from Section 3 in Section 4.3. The concepts presented in this section are quite standard, and experienced readers might want to quickly skim it, or even skip it entirely. A Measure Space over Traces We use a standard measure space over traces of samples [27]. First, we define a measurable space over traces. We denote the Borel σ-algebra on R^n by B^n, and the Borel σ-algebra on [0, 1]^n by B^n_{[0,1]}. The most common measure on B^n is the n-dimensional Lebesgue measure, denoted λ^n. For n = 0, we let λ^0 = δ_{()_S}, where δ denotes the standard Dirac measure. By combining the Lebesgue measures for each n, we construct a measure µ_S over (S, S). A Measurable Space over Terms In order to show that r_t is measurable, we need a measurable space over terms. We let (T, T) denote the measurable space that we seek to construct, and follow the approach in Staton et al. [42] and Vákár et al. [46]. Because our calculus includes the reals, we would like to at least have B ⊂ T. Furthermore, we would also like to extend the Borel measurable sets B^n to terms with n reals as subterms. For instance, we want sets obtained by varying the n real constants of a fixed term over a set in B^n to be measurable. This leads us to consider terms in a language in which constants (i.e., reals) are replaced with placeholders [·]. Most importantly, it is easy to verify that T_p is countable. Next, we make the following definitions. Definition 13. For n ∈ N_0, we denote by T^n_p ⊂ T_p the set of all terms with exactly n placeholders. Definition 14. We let t^n_p range over the elements of T^n_p. The t^n_p can be regarded as functions t^n_p : R^n → t^n_p(R^n) which replace the n placeholders with the n reals given as arguments. From the above definitions, we construct the required σ-algebra T. The Target Measure We are now in a position to define the target measure. We will first give the formal definitions, and then illustrate the definitions with examples. The definitions rely on the following result: the functions f_t and r_t are measurable (Lemma 4). We can now proceed to define the measure ⟨t⟩ over S induced by a term t using Lebesgue integration: ⟨t⟩(S) = ∫_S f_t(s) dµ_S(s) for S ∈ S. Importantly, by Lemma 15 and Lemma 2, it holds that the density f_t is unique µ_S-ae if ⟨t⟩ is σ-finite. Using Definition 17 and the measurability of r_t, we can also define a corresponding pushforward measure ⟦t⟧ over T, given by ⟦t⟧(T) = ⟨t⟩(r_t⁻¹(T)) for T ∈ T. The measure ⟦t⟧ is our target measure, i.e., the measure encoded by our program that we are interested in. Let us now consider the target measures for our earlier examples. Consider first the program in Fig. 2a. Recall that the density f_tgeo of a given trace s is 1 if s ∈ [0, 0.6]^(n−1) × (0.6, 1], and 0 otherwise. Hence, we can write ⟨t_geo⟩ as a sum of Lebesgue measures over traces of each length. Since ⟦t_geo⟧ is a distribution over N, we always have ⟦t_geo⟧({n}) = λ^n([0, 0.6]^(n−1) × (0.6, 1]) = 0.6^(n−1) · 0.4. Consequently, as expected, by taking ⟦t_geo⟧({1}), ⟦t_geo⟧({2}), ⟦t_geo⟧({3}), . . ., we exactly recover the graph from Fig. 2b. Now consider the continuous distribution given by the program t_obs, and recall the functions r_tobs and f_tobs from Section 3.2. The Beta distributions have strictly increasing cumulative distribution functions F_Beta(a, b, ·) for all a and b. It follows that h is the true inverse of this function, and is therefore bijective. Because of this, the target measure ⟦t_obs⟧ can be computed by a change of variables; in the third equality of that calculation, we have used integration by substitution. 
We also used the fact that (h⁻¹)′ is the derivative of the cumulative distribution function F_Beta(2, 2, ·). We should in some way ensure that the target measure is finite (i.e., can be normalized to a probability measure), since we are in the end most often only interested in probability measures. Unfortunately, as observed by Staton [41], there is no known useful syntactic restriction that enforces finite measures in PPLs while still admitting weights > 1. We will discuss this further in Section 6.2 in relation to SMC in our calculus. Lastly, from Section 3.2, recall that we disallow non-empty final traces in f_t and r_t. We see here why this is needed: if they were allowed, then for every trace s with f_t(s) > 0, all extensions s * s′ would have the same density f_t(s * s′) = f_t(s) > 0. From this, it is easy to check that if ⟦t⟧ ≠ 0 (the zero measure), then ⟦t⟧(T) = ∞ (i.e., the measure is not finite). In fact, for any T ∈ T, ⟦t⟧(T) > 0 ⟹ ⟦t⟧(T) = ∞. Clearly, this is not a useful target measure. Formal SMC In this section, we give a generic formalization of SMC based on Chopin [9]. We assume a basic understanding of SMC. For a concrete SMC example, see Appendix A. For a complete introduction to SMC, we recommend Naesseth et al. First, in Section 5.1, we introduce transition kernels, which is a fundamental concept used in the remaining sections of the paper. Second, in Section 5.2, we describe Chopin's generic formalization of SMC as an algorithm for approximating a sequence of distributions based on a sequence of approximating transition kernels. Lastly, in Section 5.3, we give standard correctness results for the algorithm. Preliminaries: Transition Kernels Intuitively, transition kernels describe how elements move between measurable spaces. For a more comprehensive introduction, see Vákár and Ong [47]. Definition 19. Let (A, 𝒜) and (A′, 𝒜′) be measurable spaces. A transition kernel is a function k : A × 𝒜′ → R_+ such that k(a, ·) is a measure on (A′, 𝒜′) for every a ∈ A, and k(·, B′) is measurable for every B′ ∈ 𝒜′. Additionally, we can classify transition kernels according to the below definition. Definition 20. A transition kernel k is a sub-probability kernel if k(a, ·) is a sub-probability measure for all a ∈ A; a probability kernel if k(a, ·) is a probability measure for all a ∈ A; and a finite kernel if sup_{a∈A} k(a, A′) < ∞. Algorithm The starting point in Chopin's formulation of SMC is a sequence of probability measures π_n (over respective measurable spaces (A_n, 𝒜_n), with n ∈ N_0) that are difficult or impossible to draw samples from directly. Algorithm 1: a generic formulation of sequential Monte Carlo inference based on Chopin [9]; in each step, we let 1 ≤ j ≤ J, where J is the number of samples. After the selection step, the new empirical distribution is unweighted and is given by {â^j_n}^J_{j=1}; this distribution also approximates π_n. In the mutation step, n is incremented. The SMC approach is to generate samples from the π_n by first sampling from a sequence of proposal measures q_n, and then correcting for the discrepancy between these measures by weighting the proposal samples. The proposal distributions are generated from an initial measure q_0 and a sequence of transition kernels k_n: q_n(B) = ∫_{A_{n−1}} k_n(a_{n−1}, B) dq_{n−1}(a_{n−1}). (18) In order to approximate π_n by weighting samples from q_n, we need some way of obtaining the appropriate weights. Hence, we require each measurable space (A_n, 𝒜_n) to have a default σ-finite measure µ_{A_n}, and the measures π_n and q_n to have densities f_{π_n} and f_{q_n} with respect to this default measure. Furthermore, we require that the functions f_{π_n} and f_{q_n} can be efficiently computed pointwise, up to an unknown constant factor per function and value of n. 
More precisely, we can efficiently compute the unnormalized densities f*_{π_n} = Z_{π_n} · f_{π_n} and f*_{q_n} = Z_{q_n} · f_{q_n}, corresponding to the unnormalized measures π*_n = Z_{π_n} · π_n and q*_n = Z_{q_n} · q_n. Here, Z_{π_n} = π*_n(A_n) ∈ R_+ and Z_{q_n} = q*_n(A_n) ∈ R_+ denote the unknown normalizing constants for the distributions π_n and q_n. Algorithm 1 presents a generic version of SMC [9] for approximating π_n. We make the notion of approximation used in the algorithm precise in Section 5.3. Note that in the correction step, the unnormalized pointwise evaluation of f*_{π_n} and f*_{q_n} is used to calculate the weights. In the algorithm description, we also use some new terminology. First, an empirical distribution is the discrete probability measure formed by a finite set of possibly weighted samples {(a^j_n, w^j_n)}^J_{j=1}, where a^j_n ∈ A_n and w^j_n ∈ R_+. Second, when resampling an empirical distribution, we sample J times from it (with replacement), with each sample having its normalized weight as probability of being sampled. More specifically, this is known as multinomial resampling. Other resampling schemes also exist [11], and are often used in practice to reduce variance. After resampling, the set of samples forms a new empirical distribution with J unweighted (all w^j_n = 1) samples. An important feature of SMC compared to other inference algorithms is that SMC produces, as a by-product of inference, unbiased estimates Ẑ_{π_n} of the normalizing constants Z_{π_n}. Stated differently, this means that Algorithm 1 not only approximates the π_n, but also the unnormalized versions π*_n. From the weights w^j_n in Algorithm 1, the estimates are given by Ẑ_{π_n} = ∏^n_{m=0} (1/J) Σ^J_{j=1} w^j_m (19) for each π_n. We give the unbiasedness result for Ẑ_{π_n} in Lemma 5 (item 2) below. The normalizing constant is often used to compare the accuracy of different probabilistic models, and as such, it is also known as the marginal likelihood, or model evidence. For an example application, see Ronquist et al. [37]. To conclude this section, note that many sequences of probability kernels k_n can be used to approximate the same sequence of measures π_n. The only requirement on the k_n is that f_{π_n}(a_n) > 0 ⟹ f_{q_n}(a_n) > 0 must hold for all n ∈ N_0 and a_n ∈ A_n (i.e., the proposals must "cover" the π_n) [12]. We call such a sequence of kernels k_n valid. Different choices of k_n induce different proposals q_n, and hence capture different SMC algorithms. The most common example is the BPF, which directly uses the kernels from the model as the sequence of kernels in the SMC algorithm (hence the "bootstrap"). In Section 7.1, we formalize the bootstrap kernels in the context of our calculus. However, we may want to choose other probability kernels that satisfy the covering condition, since the choice of kernels can have major implications for the rate of convergence [35]. Correctness We begin by defining the notion of approximation used in Algorithm 1. Definition 21 (based on Chopin [9, p. 2387]). Let (A, 𝒜) denote a measurable space, {{(a^{j,J}, w^{j,J})}^J_{j=1}}_{J∈N} a triangular array of random variables in A × R, and π a probability measure on (A, 𝒜). We say that {{(a^{j,J}, w^{j,J})}^J_{j=1}}_{J∈N} approximates π if lim_{J→∞} (Σ^J_{j=1} w^{j,J} ϕ(a^{j,J})) / (Σ^J_{j=1} w^{j,J}) = E_π(ϕ) almost surely for all measurable functions ϕ : (A, 𝒜) → (R, B) such that E_π(ϕ)—the expected value of the function ϕ over the distribution π—exists. First, note that the triangular array can also be viewed as a sequence of random empirical distributions (indexed by J). 
Precisely such sequences are formed by the random empirical distributions in Algorithm 1 when indexed by the increasing number of samples J. For simplicity, we often let context determine the sequence, and directly state that a random empirical distribution approximates some distribution (as in Algorithm 1). Two classical results in the SMC literature are given in the following lemma: a law of large numbers and the unbiasedness of the normalizing constant estimate. We take these results as the definition of SMC correctness used in this paper. Lemma 5. Let π_n, n ∈ N_0, be a sequence of probability measures over measurable spaces (A_n, 𝒜_n) with default σ-finite measures µ_{A_n}, such that the π_n have densities f_{π_n} with respect to these default measures. Furthermore, let q_0 be a probability measure with density f_{q_0} with respect to µ_{A_0}, and k_n a sequence of probability kernels inducing a sequence of proposal probability measures q_n, given by (18), over (A_n, 𝒜_n) with densities f_{q_n} with respect to µ_{A_n}. Also, assume the k_n are valid, i.e., that f_{π_n}(a_n) > 0 ⟹ f_{q_n}(a_n) > 0 holds for all n ∈ N_0 and a_n ∈ A_n. Then 1. the weighted empirical distributions produced by Algorithm 1 approximate π_n for each n ∈ N_0; and 2. E(Ẑ_{π_n}) = Z_{π_n} for each n ∈ N_0, where the expectation is taken with respect to the weights produced when running Algorithm 1, and Ẑ_{π_n} is given by (19). Chopin [9, Theorem 1] gives another SMC convergence result in the form of a central limit theorem. This result, however, requires further restrictions on the weights w^j_n in Algorithm 1. It is not clear when these restrictions are fulfilled when applying SMC to a program in our calculus. This is an interesting topic for future work. Formal SMC for Probabilistic Programming Languages This section contains our main contribution: how to interpret the operational semantics of our calculus as the unnormalized sequence of measures π_n in Chopin's formalization (Section 6.1), as well as sufficient conditions for this sequence of approximating measures to converge to ⟨t⟩ and for the normalizing constant estimate to be correct (Section 6.2). An important insight during this work was that it is more convenient to find a sequence of measures ⟨t⟩_n approximating the trace measure ⟨t⟩ than to find a sequence of measures ⟦t⟧_n directly approximating the target measure ⟦t⟧. In Section 6.1, we define ⟨t⟩_n similarly to ⟨t⟩, except that at most n evaluations of resample are allowed. This upper bound on the number of resamples is formalized through the relation ↪ from Section 3.3. In Section 6.2, we obtain two different conditions for the convergence of the sequence ⟨t⟩_n to ⟨t⟩: Theorem 1 states that for programs with an upper bound N on the number of resamples they evaluate, ⟨t⟩_N = ⟨t⟩. This precondition holds in many practical settings, for instance where each resampling is connected to a datum collected before inference starts. Theorem 2 states another convergence result for programs without such an upper bound but with dominated weights. Because of these convergence results, we can often approximate ⟨t⟩ by approximating ⟨t⟩_n with Algorithm 1. When this is the case, Lemma 5 implies that Algorithm 1, either after a sufficient number of time steps or asymptotically, correctly approximates ⟦t⟧ and the normalizing constant Z_⟨t⟩. This is the content of Theorem 3. We conclude Section 6.2 by discussing resample placements and their relation to Theorem 3, as well as practical implications of Theorem 3. 
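Before turning to the measures generated by a program, the following is a minimal, self-contained rendering of Algorithm 1 in Python (our own sketch, not the paper's formalization): propose with a transition kernel (mutation), weight (correction), resample multinomially (selection), and accumulate the running evidence estimate Ẑ as a product of average weights, in the spirit of (19). The toy model and all of its parameters are made up for illustration:

```python
import math
import random

def smc(init, kernel, weight_fn, n_steps, J=1000):
    """Generic SMC in the style of Algorithm 1: mutate each particle with a
    transition kernel, weight it, accumulate the evidence estimate z_hat as
    a product of average weights (cf. (19)), and resample multinomially."""
    particles = [init() for _ in range(J)]
    z_hat = 1.0
    for n in range(n_steps):
        particles = [kernel(a) for a in particles]                    # mutation
        weights = [weight_fn(n, a) for a in particles]                # correction
        z_hat *= sum(weights) / J                                     # evidence
        particles = random.choices(particles, weights=weights, k=J)   # selection
    return particles, z_hat

def normal_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

# Toy bootstrap-style model: a latent Gaussian random walk observed through
# Gaussian noise; the observations below are invented.
obs = [0.3, 0.5, 0.9]
_, z = smc(init=lambda: random.gauss(0.0, 1.0),
           kernel=lambda x: random.gauss(x, 0.5),
           weight_fn=lambda n, x: normal_pdf(obs[n], x, 1.0),
           n_steps=len(obs))
print(z)  # an unbiased estimate of the model evidence
```

Note the bootstrap flavor of the sketch: the proposal kernel is the model's own transition kernel, so the weights reduce to observation likelihoods, exactly the "covering" proposal choice discussed in Section 5.2.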
The Sequence of Measures Generated by a Program We now apply the formalization from Section 4.3 again, but with f t,n and r t,n (from Section 3.3) replacing f t and r t . Intuitively, this yields a sequence of measures t n indexed by n, which are similar to t , but only allow for evaluating at most n resamples. To illustrate this idea, consider again the program t seq in (12). Here, t seq 0 is a distribution over terms of the form E 1 seq [resample; x 1 ], t seq 1 a distribution over terms of the form E 2 seq [resample; x 2 ], and so forth. For n ≥ t, t seq n = t seq , because it is clear that t is an upper bound on the number of resamples evaluated in t seq . While the measures t n are useful for giving intuition, it is easier from a technical perspective to define and work with t n , the sequence of measures over traces where at most n resamples are allowed. First, we need the following result, analogous to Lemma 4. Analogously to Definition 17, by Lemma 15 and Lemma 2, it holds that the density f t,n is unique µ S -ae if t n is σ-finite. We can now also clarify how the resample construct relates to the resampling in the selection step of Algorithm 1. If we approximate the sequence t n with Algorithm 1, at the nth selection step of the algorithm, all traces s with non-zero weight must have r t,n (s) = v or r t,n (s) = E[resample], by Definitions 8 and 9. That is, having a q n in Algorithm 1 proposing traces other than these is wasteful, since they will in any case have weight zero. We illustrate this further when considering the bootstrap kernel in Section 7.1. Correctness We begin with a convergence result for when the number of calls to resample in a program is upper bounded. This follows directly since f t,n not only converges to f t , but is also equal to f t for all n > N . However, even if the number of calls to resample in t is upper bounded, there is still one concern with using t n as π n in Algorithm 1: there is no guarantee that the measures t n can be normalized to probability measures and have unique densities (i.e., that they are finite). This is a requirement for the correctness results in Lemma 5. Unfortunately, recall from Section 4.3 that there is no known useful syntactic restriction that enforces finiteness of the target measure. This is clearly true for the measures t n as well, and as such, we need to make the assumption that the t n are finite-otherwise, it is not clear that Algorithm 1 produces the correct result, since the conditions in Lemma 5 are not fulfilled. Fortunately, this assumption is valid for most, if not all, models of practical interest. Nevertheless, investigating whether or not the restriction to probability measures in Lemma 5 can be lifted to some extent is an interesting topic for future work. Note that, even if the target measure is finite, this does not necessarily imply that all measures t n are finite. For example, consider the program let rec inflate = if sampleBern(0.5) then weight 2; 1 + inflate () else 0 in let deflate n = weight 1/2 n in let n = inflate () in resample; deflate n; n, adapted from [6]. Clearly, t 0 is not finite (in fact, it is not even σ-finite), while t 1 = t is. Although of limited practical interest, programs with an unbounded number of calls to resample are of interest from a semantic perspective. If we have lim n→∞ t n = t pointwise, then any SMC algorithm approximating the sequence t n also approximates t , at least asymptotically in the number of steps n. 
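Returning briefly to the inflate/deflate example, the finiteness claim can be seen by an informal calculation (not reproduced from the paper): grouping traces by the number k of successful Bernoulli draws, each such group has µ_S-measure (1/2)^{k+1}, the accumulated weight before the single resample is 2^k, and the deflate call contributes a further factor 1/2^k afterwards. Hence

t_0(S) = \sum_{k=0}^{\infty} \left(\tfrac{1}{2}\right)^{k+1} \cdot 2^{k} = \sum_{k=0}^{\infty} \tfrac{1}{2} = \infty,
\qquad
t_1(S) = \sum_{k=0}^{\infty} \left(\tfrac{1}{2}\right)^{k+1} \cdot 2^{k} \cdot \tfrac{1}{2^{k}} = \sum_{k=0}^{\infty} \left(\tfrac{1}{2}\right)^{k+1} = 1,

so the measure with no resamples allowed has infinite mass, while the full program's measure is a probability measure.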
As a first example, consider the variation t_geo-res of the geometric program t_geo; the only difference is the added resample (marked with a box in the listing). Here the measures of t_geo and t_geo-res coincide, since, in general, the target is unaffected by placing resamples in t. Note that t_geo-res has no upper bound on the number of calls to resample, and therefore Theorem 1 is not applicable. It can be shown, however, that lim_{n→∞} t_geo-res,n = t_geo-res pointwise.

The question is then whether lim_{n→∞} t_n = t pointwise holds in general. The answer is no, as we demonstrate next. For lim_{n→∞} t_n = t to hold pointwise, it must hold that lim_{n→∞} f_{t,n} = f_t pointwise µ_S-ae. Unfortunately, this does not hold for all programs. Consider the program t_loop defined by let rec loop _ = resample; loop () in loop (). Here, f_{t_loop} = 0 since the program diverges deterministically, but f_{t_loop,n}(()_S) = 1 for all n, where ()_S is the empty trace. Because µ_S({()_S}) = 0, we do not have lim_{n→∞} f_{t_loop,n} = f_{t_loop} pointwise µ_S-ae.

Even if we have lim_{n→∞} f_{t,n} = f_t pointwise µ_S-ae, we might not have lim_{n→∞} t_n = t pointwise. Consider, for instance, the program t_unit given by let s = sampleU (0, 1) in let rec foo n = if s ≤ 1/n then resample; weight 2; foo (2 · n) else weight 0 in foo 1 (22). We have f_{t_unit} = 0 and f_{t_unit,n} = 2^n · 1_{[0,1/2^n]} for n > 0. Also, lim_{n→∞} f_{t_unit,n} = f_{t_unit} pointwise µ_S-ae. However, t_unit,n(S) = ∫_0^1 2^n · 1_{[0,1/2^n]}(s) ds = 1 for every n > 0, whereas t_unit(S) = 0, so lim_{n→∞} t_unit,n ≠ t_unit. This shows that the limit may fail to hold, even for programs that terminate almost surely, as is the case for the program t_unit in (22). In fact, this program is positively almost surely terminating [7] since the expected number of recursive calls to foo is 1.

Guided by the previous example, we now state the dominated convergence theorem, a fundamental result in measure theory, in the context of SMC inference in our calculus.

Theorem 2. Assume that lim_{n→∞} f_{t,n} = f_t holds pointwise µ_S-ae. Furthermore, assume that there exists a measurable function g : (S, S) → (R_+, B_+) such that f_{t,n} ≤ g µ_S-ae for all n, and ∫_S g(s) dµ_S(s) < ∞. Then lim_{n→∞} t_n = t pointwise.

For a proof, see McDonald and Weiss [28, Theorem 4.9]. It is easy to check that for our example in (22), there is no dominating and integrable g as is required in Theorem 2: any such g would have to satisfy g ≥ 2^n on (1/2^{n+1}, 1/2^n] for every n, and therefore could not be integrable. We have already seen that the conclusion of the theorem fails to hold here. As a corollary, if there exists a dominating and integrable g, the measures t_n are always finite.

Corollary 1. If there exists a measurable function g : (S, S) → (R_+, B_+) such that f_{t,n} ≤ g µ_S-ae for all n, and ∫_S g(s) dµ_S(s) < ∞, then t_n is finite for each n ∈ N_0.

This holds because t_n(S) = ∫_S f_{t,n}(s) dµ_S(s) ≤ ∫_S g(s) dµ_S(s) < ∞. Hence, we do not need to assume the finiteness of t_n in order for Algorithm 1 to be applicable, as was the case for the setting of Theorem 1. In Theorem 3, we summarize and combine the above results with Lemma 5.

Theorem 3. Let t be a term, and apply Algorithm 1 with t_n as π_n, and with arbitrary valid kernels k_n. If the condition of Theorem 1 holds and t_n is finite for each n ∈ N_0, then Algorithm 1 approximates t and its normalizing constant after a finite number of steps. Alternatively, if the condition of Theorem 2 holds, then Algorithm 1 approximates t and its normalizing constant in the limit n → ∞.

This follows directly from Theorem 1, Theorem 2, and Lemma 5. We conclude this section by discussing resample placements, and the practical implications of Theorem 3.
First, we define a resample placement for a term t as the term resulting from replacing arbitrary subterms t′ of t with resample; t′. Note that such a placement directly corresponds to constructing the sequence t_n. Second, note that the trace measure and the target measure of t are clearly unaffected by such a placement; indeed, resample simply evaluates to (), and for these measures there is no bound on how many resamples we can evaluate. As such, we conclude that all resample placements in t fulfilling one of the two conditions in Theorem 3 lead to a correct approximation of t when applying Algorithm 1. Furthermore, there is always, in practice, an upper bound on the number of calls to resample, since any concrete run of SMC has an (explicit or implicit) upper bound on its runtime. This is a powerful result, since it implies that when implementing SMC for PPLs, any method for selecting resampling locations in a program is correct under mild conditions (Theorem 1 or Theorem 2) that are most often, if not always, fulfilled in practice. Most importantly, this justifies the basic approach for placing resamples found in WebPPL, Anglican, and Birch, in which every call to weight is directly followed (implicitly) by a call to resample. It also justifies the approach to placing resamples described in Lundén et al. [26]. This latter approach is essential in, e.g., Ronquist et al. [37], in order to increase inference efficiency. Our results also show that the restriction in Anglican, requiring all executions to encounter the same number of resamples, is too conservative. Clearly, this is not a requirement in either Theorem 1 or Theorem 2. For instance, the number of calls to resample varies significantly in (20).

SMC Algorithms

In this section, we take a look at how the kernels k_n in Algorithm 1 can be instantiated to yield the concrete SMC algorithm known as the bootstrap particle filter (Section 7.1), and also discuss other SMC algorithms and how they relate to Algorithm 1 (Section 7.2).

The Bootstrap Particle Filter

We define for each term t a particular sequence of kernels k_{t,n} that gives rise to the SMC algorithm known as the bootstrap particle filter (BPF). Informally, these kernels correspond to simply continuing to evaluate the program until reaching the next resample or a value. For the bootstrap kernel, calculating the weights w_n^j from Algorithm 1 is particularly simple. Similarly to t_n, it is more convenient to define and work with sequences of kernels over traces, rather than terms. We will define k_{t,n}(s, ·) to be the subprobability measure over extended traces s * s′ resulting from evaluating the term r_{t,n−1}(s) until the next resample or value v, ignoring any call to weight. First, we immediately have that the set of all traces that do not have s as prefix must have measure zero. To make this formal, we will use the inverse images of the functions prepend_s(s′) = s * s′, s′ ∈ S, in the definition of the kernel. The next ingredient for defining the kernels k_{t,n} is a function p_{t,n} that indicates what traces are possible when executing t until the (n+1)th resample or value. The proof is analogous to that of Lemma 6. We can now formally define the kernels k_{t,n}.

Definition 24. k_{t,n}(s, S) = ∫_{prepend_s^{-1}(S)} p_{r_{t,n−1}(s),1}(s′) dµ_S(s′)

By the definition of p_{t,n}, the k_{t,n} are sub-probability kernels rather than probability kernels. Intuitively, the reason for this is that during evaluation, terms can get stuck, deterministically diverge, or even stochastically diverge. Such traces are assigned 0 weight by p_{t,n}.
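Operationally, the bootstrap kernel is simply "resume the paused execution until it hits the next resample or returns a value". The sketch below is a Python analogue of this behaviour using generators as suspended programs; it is an illustration of the idea only and does not model the calculus, the trace space S, or the measure-theoretic details. All names are placeholders.

import numpy as np

def resume(program, trace, rng):
    # Run a suspended program until its next 'resample' or until it returns.
    # `program` is a generator that yields effects:
    #   ("sample", icdf)  -> request a random draw; icdf maps u in [0,1] to a value
    #   ("weight", w)     -> multiply the current likelihood weight by w
    #   ("resample",)     -> suspension point (the bootstrap kernel stops here)
    # Returns (trace, log_weight, finished, value_or_None).
    logw, value = 0.0, None
    try:
        effect = next(program)
        while True:
            if effect[0] == "sample":
                u = rng.uniform()              # inverse transform sampling: record u in the trace
                trace = trace + [u]
                effect = program.send(effect[1](u))
            elif effect[0] == "weight":
                logw += np.log(effect[1])
                effect = program.send(None)
            elif effect[0] == "resample":
                return trace, logw, False, None   # stop here; resume again at the next SMC step
    except StopIteration as stop:
        value = stop.value
    return trace, logw, True, value

def geometric_like():
    # A geometric-flavoured example program (not the paper's t_geo): count Bernoulli(0.5)
    # successes, weighting and resampling after each success.
    n = 0
    while True:
        b = yield ("sample", lambda u: u < 0.5)
        if not b:
            return n
        yield ("weight", 1.5)
        yield ("resample",)
        n += 1

A particle is then a pair of a suspended generator and its trace, and the weight between two suspension points is the exponential of the accumulated log-weight, mirroring how the weights are computed in Algorithm 2 below.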
Lemma 9. The functions k_{t,n} : S × S → R_+ are sub-probability kernels. †

We get a natural starting measure q_0 from the sub-probability distribution resulting from running the initial program t until reaching a value or a call to resample, ignoring weights.

Definition 25. t_0(S) = ∫_S p_{t,0}(s) dµ_S(s).

Now we have all the ingredients for the general SMC algorithm described in Section 5.2: a sequence of target measures t_n = π_n (Definition 22), a starting measure t_0 ∝ q_0 (Definition 25), and a sequence of kernels k_{t,n} ∝ k_n (Definition 24). These then induce a sequence of proposal measures t_n = q_n as in Equation (18), which we instantiate in the following definition.

Definition 26. t_n(S) = ∫_S k_{t,n}(s, S) f_{t,n−1}(s) dµ_S(s), for n > 0.

Intuitively, the measures t_n are obtained by evaluating the terms in the support of the measure t_{n−1} until reaching the next resample or value. For an efficient implementation, we need to factorize this definition into the history and the current step, which amounts to splitting the traces. Each feasible trace can be split in such a way. Since the kernels k_{t,n} are sub-probability kernels, the measures t_n are finite given that the t_n are finite.

Lemma 12. t_0 is a sub-probability measure. Also, if t_{n−1} is finite, then t_n is finite. †

As discussed in Section 6.2, the target measures t_n are finite, either by assumption (Theorem 1) or as a consequence of the dominating function of Theorem 2. From this and Lemma 12, the proposal measures t_n are also finite. Furthermore, checking that the proposals are valid, i.e., that the density of each proposal measure covers the density of the corresponding target measure, is trivial. As such, by Lemma 5, we can now correctly approximate t_n using Algorithm 1. The details are given in Algorithm 2, which closely resembles the standard SMC algorithm in WebPPL. For ease of notation, we assume it is possible to draw samples from t_0 and k_{t,n}(s, ·), even though these are sub-probability measures. This essentially corresponds to assuming evaluation never gets stuck or diverges. Making sure this is the case is not within the scope of this paper. The weights in Algorithm 2 at time step n can easily be calculated according to the following lemma (Lemma 13), in which s′ * s″ = s is the unique decomposition from Lemma 10. †

Finally, it is now obvious how the resample construct relates to the resampling in the selection step in Algorithm 2: only traces for which r_{t,n}(s_n^j) is a term of the form E[resample], or a value, will issue from the mutation step and thus participate in resampling at the selection step. As a consequence of how the kernels k_{t,n} are constructed, we only stop at such terms in steps (1) and (5) when running the program. This is the reason for naming the construct resample.

Algorithm 2. A concrete instantiation of Algorithm 1 with π_n = t_n, k_n ∝ k_{t,n}, q_0 ∝ t_0, and as a consequence q_n = t_n (for n > 0). In each step, we let 1 ≤ j ≤ J, where J is the number of samples.

1. Initialization: Set n = 0. Draw s_0^j ∼ t_0 for 1 ≤ j ≤ J. That is, run the program t, and draw from U(0, 1) whenever required by a sample D. Record these draws as the trace s_0^j. Stop when reaching a term of the form E[resample] or a value v.

2. Correction: Calculate new weights w_n^j. As a consequence of Lemma 13, this is trivial. Simply set w_n^j to the weight accumulated while running t in step (1), or while running r_{t,n−1}(ŝ_{n−1}^j) in step (5). The empirical distribution given by {(s_n^j, w_n^j)}_{j=1}^{J} approximates t_n / Z_{t_n}.

3. Termination: If all samples r_t(s_n^j) are values, terminate and output {(s_n^j, w_n^j)}_{j=1}^{J}. If not, go to the next step.
We cannot evaluate values further, so running the algorithm further if all samples are values is pointless. When terminating, assuming the conditions in Theorem 1 or Theorem 2 holds, {(s j n , w j n )} J j=1 approximates t /Z t n . Also, by the definition of t , {(rt(s j n ), w j n )} J j=1 approximates t /Z t n , the normalized version of t . 4. Selection: Resample the empirical distribution {(s j n , w j n )} J j=1 . The new empirical distribution is unweighted and given by {ŝ j n } J j=1 . This distribution also approximates t n/Z t n . 5. Mutation: Increment n. Draw s j n ∼ kt,n(ŝ j n−1 , ·) for 1 ≤ j ≤ J. That is, simply run the intermediate program rt,n−1(ŝ j n−1 ), and draw from U(0, 1) whenever required by a sample D . Record these draws and append them toŝ j n−1 , resulting in the trace s j n . Stop when reaching a term of the form E[resample] or a value v. The empirical distribution {s j n } J j=1 approximates t n/Z t n . Go to (2). Other SMC Algorithms In this section, we discuss SMC algorithms other than the BPF. First, we have the resample-move algorithm by Gilks and Berzuini [14], which is also implemented in WebPPL [16], and treated by Chopin [9] andŚcibior et al. [40]. In this algorithm, the SMC kernel is composed with a suitable MCMC kernel, such that one or more MCMC steps are taken for each sample after each resampling. This helps with the so-called degeneracy problem in SMC, which refers to the tendency of SMC samples to share a common ancestry as a result of resampling. We can directly achieve this algorithm in our context by simply choosing appropriate transition kernels in Algorithm 1. Let k MCMC,n be MCMC transition kernels with π n−1 = t n−1 as invariant distributions. Using the bootstrap kernels as the main kernels, we let k n = k t,n • k MCMC,n where • denotes kernel composition. The sequence k n is valid because of the validity of the main SMC kernels and the invariance of the MCMC kernels. While Algorithm 1 captures different SMC algorithms by allowing the use of different kernels, some algorithms require changes to Algorithm 1 itself. The first such variation of Algorithm 1 is the alive particle filter, recently discussed by Kudlicka et al. [24], which reduces the tendency to degeneracy by not including sample traces with zero weight in resampling. This is done by repeating the selection and mutation steps (for each sample individually) until a trace with non-zero weight is proposed; the corresponding modifications to Algorithm 1 are straightforward. The unbiasedness result of Kudlicka et al. [24] can easily be extended to our PPL context, with another minor modification to Algorithm 1. Another variation of Algorithm 1 is the auxiliary particle filter [35]. Informally, this algorithm allows the selection and mutation steps of Algorithm 1 to be guided by future information regarding the weights w n . For many models, this is possible since the weighting functions w n from Algorithm 1 are often parametric in an explicitly available sequence of observation data points, which can also be used to derive better kernels k n . Clearly, such optimizations are model-specific, and can not directly be applied in expressive PPL calculi such as ours. However, the general idea of using look-ahead in general-purpose PPLs to guide selection and mutation is interesting, and should be explored. Related Work The only major previous work related to formal SMC correctness in PPLs iś Scibior et al. [40] (see Section 1). 
They validate both the BPF and the resamplemove SMC algorithms in a denotational setting. In a companion paper,Ścibior et al. [39] also give a Haskell implementation of these inference techniques. Although formal correctness proofs of SMC in PPLs are sparse, there are many languages that implement SMC algorithms. Goodman and Stuhlmüller [17] describe SMC for the probabilistic programming language WebPPL. They implement a basic BPF very similar to Algorithm 2, but do not show correctness with respect to any language semantics. Also, related to WebPPL, Stuhlmüller et al. [43] discuss a coarse-to-fine SMC inference technique for probabilistic programs with independent sample statements. Wood et al. [50] describe PMCMC, an MCMC inference technique that uses SMC internally, for the probabilistic programming language Anglican [44]. Similarly to WebPPL, Anglican also includes a basic BPF similar to Algorithm 2, with the exception that every execution needs to encounter the same number of calls to resample. They use various types of empirical tests to validate correctness, in contrast to the formal proof found in this paper. Related to Anglican, a brief discussion on resample placement requirements can be found in van de Meent et al. [48]. Birch [31] is an imperative object-oriented PPL, with a particular focus on SMC. It supports a number of SMC algorithms, including the BPF [19] and the auxiliary particle filter [35]. Furthermore, they support dynamic analytical optimizations, for instance using locally-optimal proposals and Rao-Blackwellization [30]. As with WebPPL and Anglican, the focus is on performance and efficiency, and not on formal correctness. There are quite a few papers studying the correctness of MCMC algorithms for PPLs. Using the same underlying framework as for their SMC correctness proof,Ścibior et al. [40] also validates a trace MCMC algorithm. Another proof of correctness for trace MCMC is given in Borgström et al. [6], which instead uses an untyped lambda calculus and an operational semantics. Much of the formalization in this paper is based on constructions used as part of their paper. For instance, the functions f t and r t are defined similarly, as well as the measure space (S, S, µ S ) and the measurable space (T, T ). Our measurability proofs of f t , r t , f t,n , and r t,n largely follow the same strategies as found in their paper. Similarly to us, they also relate their proof of correctness to classical results from the MCMC literature. A difference is that we use inverse transform sampling, whereas they use probability density functions. As a result of this, our traces consist of numbers on [0, 1], while their traces consist of numbers on R. Also, inverse transform sampling naturally allows for built-in discrete distributions. In contrast, discrete distributions must be encoded in the language itself when using probability densities. Another difference is that they restrict the arguments to weight to [0, 1], in order to ensure the finiteness of the target measure. Other Classical work on SMC includes Chopin [9], which we use as a basis for our formalization. In particular, Chopin [9] provides a general formulation of SMC, placing few requirements on the underlying model. The book by Del Moral [10] contains a vast number of classical SMC results, including the law of large numbers and unbiasedness result from Lemma 5. A more accessible summary of the important SMC convergence results from Del Moral [10] can be found in Naesseth et al. [32]. 
Conclusions In conclusion, we have formalized SMC inference for an expressive functional PPL calculus, based on the formalization by Chopin [9]. We showed that in this context, SMC is correct in that it approximates the target measures encoded by programs in the calculus under mild conditions. Furthermore, we illustrated a particular instance of SMC for our calculus, the bootstrap particle filter, and discussed other variations of SMC and their relation to our calculus. As indicated in Section 2, the approach used for selecting resampling locations can have a large impact on SMC accuracy and performance. This leads us to the following general question: can we select optimal resampling locations in a given program, according to some formally defined measure of optimality? We leave this important research direction for future work. A SMC: an Illustrative Example In order to fully appreciate the contributions of this paper, we devote this section to introducing SMC inference for the unfamiliar with an informal example. The example is based on Lindholm [25]. A.1 Model Consider the following scenario: a pilot is flying an aircraft in bad weather with zero visibility, and is attempting to estimate the aircraft's position. In order to do this, available is an elevation map of the area, a noisy altimeter, and a noisy sensor for measuring the vertical distance to the ground (see Fig. 4 for an illustration). Concretely, assume that (a) X 0:t = X 0 , X 1 , . . . , X t are real-valued random variables representing the true horizontal position of the aircraft at the discrete time steps 0, 1, . . . , t, and (b) Y 0:t = Y 0 , Y 1 , . . . , Y t are real-valued random variables for the measurements given by subtracting the vertical distance sensor reading from the altimeter sensor reading. The problem we consider is to estimate the positions X n , n ≤ t, given all combined sensor measurements Y 0:n collected up until time n. This random variable is denoted X n | Y 0:n , and the distribution for this random variable is known as the target measure. In general, X | Y denotes the random variable X conditioned on Y having been observed. Concretely, we assume the following model for n ∈ N: In other words, we have that the initial position X 0 of the aircraft is uniformly distributed between 0 and 100, and at each time step n, X n is normally distributed around X n−1 + 2 with variance 1 (the conditional distribution of X n | X n−1 is known as a transition kernel ). Finally the combined measurement Y n from the sensors is normally distributed around the true elevation of the ground at the current horizontal position X n with variance 2, where the true position is given by our elevation map, here modeled as a function elevation. A.2 Inference With the model in place, we can proceed to sequentially estimating the probability distributions for the random variables X n | Y 0:n using the BPF, a fundamental SMC algorithm. In Section 7.1, we will give a formal definition of this algorithm for models encoded in our calculus. Here, we instead give an informal description for our current aircraft model. In Fig. 4, we show the true initial aircraft position (1), and the true position at three later time steps, denoted by (2), (3), and (4). In addition, for each of these time steps, we show the empirical SMC approximations to the distributions for X n | Y 0:n , where n is increasing for each of the four positions. 
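Before walking through the figure, the model and the BPF just described can be written down compactly. The following Python sketch is illustrative only: the terrain function elevation and the synthetic observations are placeholders (the actual elevation map and sensor data are not reproduced here), while the probabilistic structure follows the model stated above, with X_0 ~ U(0, 100), X_n ~ N(X_{n-1} + 2, 1), and Y_n ~ N(elevation(X_n), 2).

import numpy as np
from scipy.stats import norm

def elevation(x):
    # Placeholder terrain profile; the real model would query the elevation map.
    return 50.0 + 10.0 * np.sin(x / 5.0)

def bpf(ys, J=1000, rng=None):
    rng = rng or np.random.default_rng()
    xs = rng.uniform(0.0, 100.0, size=J)                        # X_0 ~ U(0, 100)
    for n, y in enumerate(ys):
        if n > 0:
            xs = rng.normal(xs + 2.0, 1.0)                      # X_n | X_{n-1} ~ N(X_{n-1}+2, 1)
        w = norm.pdf(y, loc=elevation(xs), scale=np.sqrt(2.0))  # Y_n | X_n ~ N(elevation(X_n), 2)
        w = w / w.sum()
        xs = xs[rng.choice(J, size=J, p=w)]                     # multinomial resampling
    return xs                                                   # unweighted samples approximating X_n | Y_{0:n}

# Example usage with synthetic observations (placeholders for the c_0, ..., c_t in Fig. 5):
# true_x = np.cumsum(np.r_[30.0, 2.0 + np.random.randn(49)])
# ys = elevation(true_x) + np.sqrt(2.0) * np.random.randn(true_x.size)
# posterior_samples = bpf(ys)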
Step (1) 3) Next, we take the set of weighted particles from the previous time step and resample them according to their weights. That is, we draw (with replacement) a set of new samples from the previous set of samples, based on their relative weights. We see that the samples with high weight are indeed the ones to survive this resampling step. Note that after resampling, we also reset the weights (which is required for correctness). (1.4) For each sample of X 0 , draw from the distribution of X 1 | X 0 to propagate it forwards by one time step. (2) At this point, we have completed many iterations of the above four substeps-the exception being that in the first sub-step, we don't draw from U(0, 100), but instead reuse the set of particles from the previous step. We see that the set of samples now correctly cluster on the true position. have diverged slightly, representing the increased uncertainty in the aircraft's position. (4) When encountering more varied terrain once again, the uncertainty is reduced, and the set of samples again cluster more closely on the true position. The key step in every SMC algorithm is the resampling step illustrated above. Resampling allows for focusing the empirical approximations on regions of the sample space with high probability, yielding efficient inference for many models of practical interest. For instance, SMC is commonly used in tracking problems [1,20]. It is also possible to encode the example as a program in the calculus from Section 3. This is done in Fig. 5. The real numbers c 0 , c 1 , c 2 , . . . , c t in the program correspond to the observations of Y 0:t . B Definitions and Proofs In this appendix, we prove lemmas found throughout the main article. First, we introduce measure theory and Borel spaces (Section B.1), and define pointwise convergence of functions (Section B.2). Then, we introduce metric spaces and their properties (Section B.3), and look closer at the measure space (S, S, µ S ) (Section B.4) and the measurable space (T, T ) (Section B.5). In Section B.6 and Section B.7, we establish further results required for proving the measurability of r t and f t (Section B.8), and r t,n and f t,n (Section B.9). Lastly, we look at the bootstrap particle filter kernels k t,n and induced proposal measures t n (Section B.10). B.1 Preliminaries: Measure Theory and Borel Spaces This section gives fundamental definitions and lemmas from measure theory, and defines Borel spaces. For a more pedagogical introduction to the subject, we recommend McDonald and Weiss [28]. Definition 27. Let A be a set. We say that Definition 28 . Let (A, A) and (A , A ) be measurable spaces. A function f : To indicate that a function is measurable with respect to specific measurable spaces, we write f : (A, A) → (A , A ). Definition 29. Let (A, A) be a measurable space, and let R * (3) if {A n } n ⊂ A is countable, and such that A i ∩ A j = ∅ for i = j, then µ ( n A n ) = n µ(A n ). Furthermore, we call (A, A, µ) a measure space if A is a σ-algebra on A, and µ is a measure on A. Definition 35. Let (A, A, µ) be a measure space. We say that a property holds µ almost everywhere, or µ-ae for short, if there is a set B ∈ A of µ-measure 0 such that the property holds on A \ B. When µ is a (sub-)probability measure, the term "almost surely" is used interchangeably with "almost everywhere". B.2 Preliminaries: Convergence In this section, we recall the definition of pointwise convergence of sequences of functions. 
Convergence is used to define correctness in Section 5.3 and Section 6.2. For a more comprehensive introduction to convergence, we recommend McDonald and Weiss [28]. Definition 36. Let {x n } n∈N be a sequence of real numbers, and x a real number. We say that lim n→∞ x n = x if for all ε > 0, there exists an N such that |x n −x| < ε for all n > N . Definition 37. Let {f n : A → R} n∈N be a sequence of functions, and f : A → R a function. We say that lim n→∞ f n = f pointwise if for all x ∈ A, it holds that lim n→∞ f n (x) = f (x). In particular, we say that lim n→∞ f n = f µ-ae if the sequence f n converges pointwise to f , except on a set of µ-measure 0. Definition 39. For n ∈ N, we let d R n ((x 1 , x 2 , . . . , x n ), (y 1 , y 2 , . . . , y n )) = |x 1 −y 1 |+|x 2 −y 2 |+· · ·+|x n −y n |, (25) and d R = d R 1 . It is easy to verify that d R n is a metric for each n. Proof. We have to show that S is a σ-algebra: Since B c n ∈ B n [0,1] , the implication holds. Since i B n,i ∈ B n [0,1] , the implication holds. Proof. We begin by showing that µ S is a measure. 2. µ S (∅) = 0. Follows since Next, we need to show that µ S is σ-finite. To do this, we show that there is a sequence {S i } i ⊂ S, µ S (S i ) < ∞ for all i, such that i S i = S. We can choose these S i simply as We now define a metric on S. Definition 45. Let c i and c i denote the ith element of s ∈ S and s ∈ S, respectively. It is easy to verify that S Q is a countable dense subset of S, from which the result follows. Proof. Informally, this follows since S is the union of a countable set of isolated subspaces (the distance from each element in a subset to all elements of other subsets is ∞) which are all isomorphic to R n , for some n ∈ N 0 . More formally, note that S = σ n∈N0 B n [0,1] . Clearly, by definition, Hence, Next, because the distance between traces of different length is ∞, we note that The result follows. Proof. Analogous to the proof for Lemma 29. where k is the maximum of the number of occurrences of x in t 1 and t 2 . Proof. The result follows immediately if d T (t 1 , t 2 ) = ∞. Therefore, assume d T (t 1 , t 2 ) < ∞. We now proceed by induction over the structure of t 1 and t 2 . The result follows immediately. . By using the induction hypothesis, we because the number of occurrences k of x are the same in (λx .t) and t. -Case t 1 = x , t 2 = x . In this case, we have two subcases: either and the result follows immediately (k = 1). In the case x = x , and the result follows immediately (k = 0). -Case t 1 = t 1 t 2 , t 2 = t 1 t 2 . By using the induction hypothesis, we have -The remaining cases follow by largely similar arguments. where × denotes the usual Cartesian product of sets. Lemma 32. Let where n i=1 d i is the Manhattan metric formed from the component metrics d i . be a set of separable metric spaces. Then is a separable metric space, and Lemma 35. Let A ⊂ P(A). Furthermore, let (A , A ), and (A, σ(A)) be measurable spaces. Then f : Proof. The "only if" part is trivial. We now show the "if" part. Consider the set B = {A ∈ P(A) | f −1 (A) ∈ A }. Obviously, A ⊂ B. Furthermore, from properties of the preimage, it is easy to check that B is a σ-algebra. Therefore, σ(A) ⊂ B, and f −1 (A) ∈ A for each A ∈ σ(A). Hence, f is measurable. be a finite set of measurable functions. Then is measurable. Proof. By Lemma 35, it suffices to check that Hence, for all A × = × n i=1 A i , by properties of the preimage and the measurability of the f i , The result follows. 
B.7 The Big-Step Function Induced by a Small-Step Relation. Assume there is a small-step relation → which can be regarded as a measurable function →: with A ∈ A. We complete this function, forming the function step → : A → A. Because → and id are measurable, we have step −1 → (A) ∈ A, as required. In the following, we use the notation with n ∈ N 0 . Next, assume that we have a measurable function extract : (A, A) → (H, H). We require that H has a bottom element ⊥ (such that {⊥} ∈ H) and that H is equipped with a flat partial order ≤ H (i.e., the smallest partial order with ⊥ ≤ H h for all h ∈ H). Furthermore, we require that extract has the following property with respect to the function step → . Proof. This proof is based on Borgström et al. [5,Lemma 89]. Let f n = extract • step n → . The function f n is clearly measurable, since it is a composition of measurable functions (step n → is measurable as a consequence of Lemma 37). Next, let sup f n = final →,extract , and pick an arbitrary H ∈ H such that ⊥ ∈ H. Then which is measurable by definition. Also, is also measurable by definition. Now assume ⊥ ∈ H. Then which is also measurable by (60) and (61). We summarize all of the above in the following lemma. In this section, we prove that r t and f t are measurable. We follow the proof strategy from Borgström et al. [5]. Condition 2 We require that, for each identifier D ∈ D, the function is measurable. Condition 3 We require that, for each identifier g ∈ G, the function is measurable. Lemma 42. T App , T Prim , T IfTrue , T IfFalse , and T d are T -measurable. Proof. We can write all of these sets as countable unions of sets of the form t n p (R n ). Hence, they must be T -measurable. Definition 55. Proof. By Lemma 14. (73) By applying Lemma 27, Lemma 31, and Lemma 29 (in that order), we have Hence, we see that by selecting δ = ε k+1 , we get the implication (73) and the function is continuous, and hence measurable. For any E [g(c 1 , . . . , c |g| )] ∈ T Prim , by Lemma 29, we have From this, it follows that unbox is continuous (set δ = ε) and hence measurable. Furthermore, implying that box E is continuous (set δ = ε) and measurable as well. Lastly, we have It holds that box E • σ g • unbox is measurable (composition of measurable functions) for each g and E. Because E and G are countable, by Lemma 32, step Prim is measurable. Proof. We show that step IfTrue is continuous as a function between the metric spaces (T IfTrue , d T ) and (T, d T ). By Lemma 44 and Lemma 34, the result then follows. Pick arbitrary E[if true then t 1 else t 2 ] ∈ T IfTrue and ε > 0. Following Definition 51, we want to show that there exists a δ > 0 such that for all E [if true then t 1 else t 2 ] ∈ T IfTrue , We have Hence, we see that by selecting δ = ε, we get the implication (79) and the function is continuous, and hence measurable. Proof. Follows from Lemma 45 and Lemma 32. Let us now make the following definition With T ⊥ = T∪{⊥}, and T ⊥ the corresponding least σ-algebra such that T ⊂ T ⊥ (which must necessarily contain {⊥}), we have the following lemma. Proof. We have extract → Det = id| T c d ∪ ⊥| T d , where ⊥ here denotes the constant function producing ⊥. Because id, ⊥, and T d are measurable, the result follows by Lemma 32. Definition 58. The partial order ≤ d is the least partial order on T ⊥ with ⊥ ≤ d t. Proof. Consider first t ∈ T d . We then have extract → Det (t) = ⊥ by definition, and the result holds immediately. Now consider t ∈ T d . By definition, step → Det (t) = t, and the result holds. 
Lastly, we apply Lemma 41 to get the measurable function final → Det . Proof. We can write the sets T Sample , T Weight , and T Resample as countable unions of sets of the form t n p (R n ). Hence, they must be T -measurable. T Det is measurable because final → Det is a measurable function, and V, T Sample , T Weight , and T Resample are measurable. Finally, T s is measurable because it is a finite union of measurable sets. The below Lemma allows us to ignore the element ⊥ introduced by the function extract → Det . Proof. The restriction of a measurable function to a measurable set is also a measurable function (follows from Lemma 32). Furthermore, we can restrict the codomain from (T ⊥ , T ⊥ ) to (T, T ) as a result of Lemma 55 and by the definition of (T ⊥ , T ⊥ ). we have Hence, by choosing δ = ε, we see that π j is continuous. Lemma 63. X Det , X Sample , X Weight , X Resample , and X s are all X -measurable. Proof. X Det , X Sample , X Weight , and X Resample are the Cartesian products of measurable sets, hence measurable. X s is a finite union of measurable sets, hence measurable. Lemma 64. X Det , X Sample , X Weight , X Resample , and X s are σ-algebras. Lemma 65. Let d X = d T + d R+ + d S . Then Furthermore, (X, d X ) is a separable metric space. Proof. The projections π 1 ,π 2 , and π 3 are continuous and hence measurable. From Lemma 57, final → Det | T Det is measurable, and therefore, so is the composition final → Det | T Det • π 1 . By Lemma 36, the result now follows. By copying the arguments from the proof of Lemma 48, unbox and box E are measurable. Next, define head (p :: s) = p tail (p :: s) = s. By letting δ = ε, we see that head is continuous and hence measurable. Furthermore, by a similar argument, tail is continuous and measurable. Now, we note that By the measurability of the component functions, Lemma 36 (applied twice), and Lemma 32, the result follows. Proof. Pick arbitrary E ∈ E and define By using similar arguments as in the proof of Lemma 48, it holds that unbox and box are measurable. Now, we note that Here, · denotes the pointwise function product. That is, for two functions f and g, (f ·g)(x) = f (x)·f (g). It is a standard result in measure theory that the function product of two measurable functions is measurable. By the measurability of the component functions, Lemma 36, and Lemma 32, the result now follows.
19,604.2
2020-03-11T00:00:00.000
[ "Computer Science", "Mathematics" ]
Connecting the Higgs Potential and Primordial Black Holes It was recently demonstrated that small small black holes can act as seeds for nucleating decay of the metastable Higgs vacuum, dramatically increasing the tunneling probability. Any primordial black hole lighter than $4.5\times 10^{14}$g at formation would have evaporated by now, and in the absence of new physics beyond the standard model, would therefore have entered the mass range in which seeded decay occurs, however, such true vacuum bubbles must percolate in order to completely destroy the false vacuum; this depends on the bubble number density and the rate of expansion of the universe. Here, we compute the fraction of the universe that has decayed to the true vacuum as a function of the formation temperature (or equivalently, mass) of the primordial black holes, and the spectral index of the fluctuations responsible for their formation. This allows us to constrain the mass spectrum of primordial black holes given a particular Higgs potential and conversely, should we discover primordial black holes of definite mass, we can constrain the Higgs potential parameters. I. INTRODUCTION One of the most fascinating implications of the measurement of the Higgs mass at the LHC [1,2] is that the standard model vacuum appears to be metastable [3][4][5][6][7][8][9][10][11]. Initially, this was not thought to be a problem for our universe, as standard techniques for computing vacuum decay [12][13][14][15] indicated that the half-life was many order of magnitude greater the age of the universe. However, vacuum decay represents a first order phase transition, and in nature these typically proceed via catalysis: a seed or impurity acts as a nucleus for a bubble of the new phase to form. In [16][17][18][19][20], the notion that a black hole could act as such a seed was explored, with the finding that black holes can dramatically shorten the lifetime of a metastable vacuum (see also [21][22][23][24][25][26]). Interestingly, before the discovery of the Higgs particle, the electroweak phase transition was usually described as a second order transition, and in [27] the idea that the usual second order electroweak phase transition might be followed by a first order phase transition was explored. For a black hole to seed vacuum decay, we must be sure that the half-life for decay is less than the evaporation rate of the black hole. This means that the branching ratio of tunnelling to decay must be greater than one. In [18,19] this was found to occur for black holes of order 10 6−9 M p or so, by which point the half-life for decay is of order 10 −23 s! Clearly this process is not relevant for astrophysical black holes, however, it has been hypothesised that there exist very light black holes, formed from extreme density fluctuations in the early universe [28][29][30] dubbed primordial black holes. Such black holes have been proposed as a source for dark matter [31], and although this has now been ruled out [32], they could still constitute a component of the dark matter of the universe. Indeed, it has even been proposed that the Higgs vacuum instability could generate primordial black holes in the early universe [33]. Given that we are in a current metastable Higgs vacuum, we can be sure that there has been no primordial black hole that has evaporated in our past lightcone, however, how strong a constraint on primordial black holes can we place? 
For the universe to have decayed, the black hole must not only have evaporated sufficiently to reach the mass range in which catalysis spectacularly dominates, but the consequent bubble (or bubbles) of true vacuum must have percolated to engulf the current Hubble volume. Thus, this is a statement about the relative volume in the percolated bubble, which is itself a statement on the primordial black hole density and mass. In this paper, we draw together all these aspects of the problem, linking the primordial black hole spectral index and formation epoch to the standard model parameters. The outline of the paper is as follows. In section II we review the physics of the Higgs vacuum decay in the presence of gravity. In section III we relate the primordial black hole masses that can trigger vacuum decay with the parameters in the effective Higgs potential. In section IV we put this scenario in the cosmological context: Every black hole that can trigger the vacuum decay will create a bubble of true vacuum. These bubbles then expand with the speed of light, but their number density decreases due to the expansion of the universe. For a successful phase transition, the bubbles have to percolate, so we define a quantity P, which represents the portion of the universe that has already transitioned to the new vacuum. For P ≥ 1, the universe would be destroyed, thus the associated range of parameters is excluded. We summarise and discuss our findings in section V. II. FALSE VACUUM DECAY WITH BLACK HOLES The high energy effective Higgs potential has been determined by a two-loop calculation in the standard model as [7] where λ eff (φ) is the effective coupling constant that runs with scale. We now review the calculations in [19], adopting the same conventions. The running of the coupling constant can be excellently modelled over a large range of scales by the three parameter fit: where M −2 p = 8πG. By fitting the two-loop calculation with a simple analytic form, we can easily investigate not only the standard model, but also beyond the standard model potentials, allowing us to explore possible future corrections to the standard model results. The Higgs potential supports a first order phase transition mediated via nucleation of bubbles of new vacuum inside the old, false, vacuum. The nucleation rate in the presence of gravity is determined by a saddle point 'bounce' solution of the Euclidean (signature +, +, +, +) action: The spacetime geometry is taken to have SO(3) × U (1) symmetry, in other words, it is spherically symmetric "around" the black hole, and has time translation symmetry along the Euclidean time direction, τ : with We can think of µ(r) as the local mass parameter, however caution must be used in pushing this analogy. For an asymptotic vacuum of Λ = 0, then µ(∞) truly is the ADM mass of the black hole, however, locally, µ also includes the effect of any vacuum energy: for a pure Schwarzschild-(A)dS solution, µ(r) = M + Λr 3 /6G. Since we are interested in seeding the decay of our current SM vacuum, we will take Λ + = 0, so that the asymptotic value of µ is indeed the seed black hole mass, M + , responsible for triggering the phase transition. The remnant mass, which is a leftover from the seed black hole after some of its energy is invested into the bubble formation, may not be precisely µ(r h ), however, since we will be interested only in the area of the remnant black hole horizon, it turns out that µ(r h ) is in fact the desired quantity. 
The Higgs and gravitational field equations of motion are where V ,φ ≡ ∂V /∂φ. The black hole horizon is at r = r h , at which f (r h ) = 0. We have to solve these equations of motion numerically in order to get the function φ(r), and to do this, we start from the horizon r h with a particular remnant parameter, r h = 2Gµ − , and some value for the Higgs field φ h . At the horizon therefore the fields satisfy the boundary and as r → ∞, We use a shooting method starting at r h with φ = φ h and integrate out, altering φ h until a solution is obtained with φ tending to 0 for very large values of r. In practise, rather than setting the asymptotic mass µ(∞) = M + , we set the initial (remnant) value of µ − and deduce the seed mass from (8), repeating the integration for a range of values of µ − . We The decay rate of the Higgs vacuum, Γ D , is then determined by computing the difference in entropy between the seed and remnant black holes: where As pointed out in [17][18][19], a black hole can also radiate and lose mass, eventually disappearing in Hawking radiation, at a rate initially estimated by Page [34], see also [35][36][37][38][39]: Thus, we define the branching ratio between the tunneling and evaporation rate as This equation contains all the information we need. In the next two sections, we will study the consequences of the gravitationally induced false Higgs vacuum decay. III. THE VACUUM DECAY RATE AND THE HIGGS EFFECTIVE POTENTIAL If the branching ratio given by Eq. (12) is larger than one, then the tunneling rate is faster than evaporation rate, and the black hole can catalyze false vacuum decay. Note that the branching ratio depends on three parameters: M + , λ * , and b: fitting the form of λ eff in (2) to the standard model value at the electroweak scale fixes c in terms of λ * and b, and M + is the primordial black hole seed. Let us first illustrate the results for some sample choices of the potential parameters. If we set λ * = −0.004, b = 1.5 × 10 −5 , c = 0, then Fig. 1 shows that the branching ratio is larger than one for This means that primordial black holes with masses within this range can initiate Higgs vacuum decay for the associated values of the Higgs potential parameters. A black hole mass with the lifetime of the current age of the universe is approximately 4.5 × 10 14 grams, meaning that all black holes lighter than this value would have already evaporated. Along the way, they will inevitably end up in the range given by Eq. (13). This however does not automatically imply that all the primordial black holes lighter than 4.5 × 10 14 grams are excluded for this choice of parameters. To destroy the universe the bubbles of the true vacuum have to percolate, which takes time. We will study this in the next section. The same Fig. 1 indicates that if we set λ * = −0.00045, and keep b = 1.5 × 10 −5 and c = 0, then the branching ratio is always smaller than one (these values are not consistent with a pure standard model effective coupling, however, indicate the principle of model dependence of the branching ratio). In that case, the primordial black holes of any mass (i.e. M p < M + < ∞) cannot stimulate the false vacuum to decay into true vacuum, and our universe is safe. We excluded the black hole seed masses less than M p from the discussion, as the semi-classical approximation used in computing the decay rate is no longer expected to be valid at the Planck scale, where presumably a full theory of quantum gravity is required. 
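A quick numerical check of the mass and time scales quoted here can be useful. The sketch below assumes the standard cubic scaling of the Page evaporation time with black hole mass, normalized so that a 4.5 × 10^14 g black hole lives for the age of the universe; the scaling law is a back-of-the-envelope assumption rather than a result reproduced from Eqs. (11)-(12), but it reproduces the statement that seed black holes in the 10^6-10^9 M_p window disappear essentially instantly on cosmological time scales.

# Order-of-magnitude evaporation times under the standard t ~ M^3 scaling, normalized to
# t(4.5e14 g) = 1.38e10 yr. The scaling law itself is an assumption made for illustration.
M_PLANCK_G = 2.18e-5          # Planck mass in grams
T_UNIVERSE_YR = 1.38e10
SECONDS_PER_YEAR = 3.15e7

def evaporation_time_yr(mass_g):
    return T_UNIVERSE_YR * (mass_g / 4.5e14) ** 3

for mass in (4.5e14, 1e9 * M_PLANCK_G, 1e6 * M_PLANCK_G):
    t = evaporation_time_yr(mass)
    print(f"M = {mass:.3g} g  ->  t_evap ~ {t:.2g} yr  ({t * SECONDS_PER_YEAR:.2g} s)")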
It is now instructive to systematically analyze the range of parameters for the effective coupling (2). Fig. 2 shows the threshold curve Γ D Γ H = 1 in (b, λ * ) parameter space for two values of the parameter c. The region of parameter space with Γ D Γ H < 1, for which the universe is safe, is above the curve. Below the curve, the branching ration will be greater than one for some range of black hole masses (similar to that shown in Eq. (13)) below the quantum gravity scale. This range is different for differing λ * , b, and c (so not easy to plot) however, it can easily calculated by substituting the concrete values for λ * , b, and c, in Eq. (12). The boundary with c = 6.3 × 10 −8 is lower than that with c = 0 because of the contribution from the quartic terms in the Higgs potential. However, numerical experiments indicate that the curves do not change significantly as we vary the parameter c. According to [19] the standard model parameter space corresponding to the allowed range IV. PRIMORDIAL BLACK HOLE MASSES AND PERCOLATING BUBBLES In the previous section, we saw that any primordial black hole that had enough time to evaporate sufficiently to fit into an appropriate mass range for the corresponding choice of the parameters λ * and b, could initiate false vacuum decay. The bubbles of true vacuum then expand with the speed of light, but the background universe expands as well. Successful completion of the first order phase transition depends on the number density of the created bubbles. In our scenario, every black hole that can initiate the false vacuum decay will create a bubble, so the number of the bubbles is equal to the number of such primordial where T is the temperature of radiation. Obviously, the earlier the black holes are formed, the lighter they are, hence their lifetime is shorter. Their lifetime is given as [34][35][36][37][38][39]. Black holes of mass M 4.5 × 10 14 g have a lifetime greater than 1.38 × 10 10 years, or the age of the universe. Therefore only lighter primordial black holes will have the potential to destroy the universe. We focus on these lighter black holes which, according to Eq. (14), are created at temperatures higher than T F 4.7 × 10 8 GeV. After primordial black holes are formed at T F , their number density changes with temperature as where β i is the mass fraction of the universe in black holes at formation, while ρ r,i = π 2 30 g F T 4 F is the radiation energy density at that time, with g F ≈ 100 being the number of degrees of freedom of radiation species at T F . M F is the mass of the primordial black holes at formation, and we take M F = M H (T F ) as usual. The mass fraction β i can be found assuming a Gaussian perturbation spectrum of fluctuations that lead to black hole formation (see e.g. [40,46,47]) The parameter δ min ≈ 0.3 is the minimum density contrast required for black hole creation, while σ H (T ) is the mass variance evaluated at horizon crossing at the temperature T defined as [46] σ H ( Here, T eq ≈ 0.79eV is the temperature at the matter/radiation equilibrium, T 0 = 2.725K = 2.35 × 10 −4 eV is the present temperature of the universe, while n is the spectral index of the fluctuations that lead to black hole formation, i.e. P (k) ∝ k n . Note, the cosmic microwave background data indicate that the value of the spectral index of the inflaton field is n ≈ 1, however the CMB data probe the scales between 10 45 and 10 60 times larger than those probed by primordial black holes. 
It is expected that primordial black holes are formed by fluctuations of fields other than the inflaton (e.g. during phase transitions), and the typical value of n used in this context is between 1.23 and 1.31 [40,47,48]. To normalize Eq. (18) we use the mass variance evaluated at the horizon crossing σ H (T 0 ) = 9.5 × 10 −5 . We now have all the elements to calculate the black hole abundance for any set of desired parameters. After formation, primordial black holes evaporate, and at some stage of their life they will trigger false vacuum decay. When exactly this will happen depends on the specific parameters of the Higgs potential; we must be above the threshold value of the branching ratio, or in the range of parameters below the curve in Fig. 2, where it is guaranteed that the phase transition will be initiated for some black hole mass range. To illustrate the procedure, we calculate the excluded primordial black hole parameter space for the example from Section II, i.e. for the values of the potential parameters λ * = −0.004, b = 1.5 × 10 −5 , c = 0. As shown in section II, the branching ratio is larger than one for the seed black hole masses M p M + 10 6 M p , therefore all the black holes that have evaporated down to 10 6 M p or less by the present time will trigger false vacuum decay for this set of parameters. We note that this number is effectively the same as the number of the black holes that have evaporated completely by the present time, since it takes only a fraction of the second for a black hole to evaporate from 10 6 M p to zero. The scenario is as follows. Suppose that primordial black holes are formed at a temperature T F with mass M F (lighter than 4.5 × 10 14 g). They then evaporate until they reach a mass of 10 6−9 M p , which is essentially equivalent to a complete evaporation, given the scale of the lifetimes involved. At that moment (which depends on the initial black hole mass) they seed vacuum decay and form a bubble of true vacuum that then expands at the speed of light. For a successful phase transition, the bubbles have to percolate, so we compute the overall volume of true vacuum in the expanding universe from the volume of an individual bubble and the number density of black holes. The present time number density of the bubbles, n b (T 0 ), is shown in Fig. 3. It is calculated from Eq. (16) following the procedure outlined above. The present time radius of the bubble depends on the time it was created. If an object (in this case a bubble of true vacuum) is created at a cosmological redshift Z, its present age, t, is given by where Here Ω m , Ω rad , Ω k and Ω Λ are the present values of the dark matter, radiation, curvature, and dark energy density respectively. We take their numerical values from Planck results [49], Since dr = cdt/a = cdt(1 + Z), the current physical radius of the true vacuum bubble where E is given by Eq. (20). The redshift, Z B , is calculated at the moment when a black hole of a certain seed mass (formed at the temperature T F ) evaporates enough to fit into the appropriate mass window where it can trigger the false vacuum decay. Thus, the portion of the universe which is already in the new vacuum at the present time Fig. 4 shows the boundary of the P =1 region. For the range of parameters in the upper part of the plot the universe today is destroyed, since the bubbles percolate. 
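The geometric part of this computation can be sketched compactly: the present-day radius of a bubble nucleated at redshift Z_B is the comoving distance light travels from Z_B until today, and P is that volume multiplied by the present bubble number density. The Python sketch below assumes the standard flat-ΛCDM form of E(z) and representative Planck-like density parameters; both these parameter values and the bubble number density n_b passed in are placeholders, since the actual values follow from Eq. (16) and the Planck results [49].

import numpy as np
from scipy.integrate import quad

# Representative (placeholder) cosmological parameters; the paper takes its values from Planck [49].
H0 = 67.7 / 3.086e19          # Hubble constant in 1/s (67.7 km/s/Mpc)
OMEGA_M, OMEGA_R, OMEGA_K, OMEGA_L = 0.31, 9e-5, 0.0, 0.69
C = 2.998e10                  # speed of light in cm/s

def E(z):
    return np.sqrt(OMEGA_R * (1 + z) ** 4 + OMEGA_M * (1 + z) ** 3
                   + OMEGA_K * (1 + z) ** 2 + OMEGA_L)

def bubble_radius_cm(z_b):
    # Present physical radius of a light-speed bubble nucleated at redshift z_b.
    integral, _ = quad(lambda z: 1.0 / E(z), 0.0, z_b)
    return (C / H0) * integral

def decayed_fraction(z_b, n_b_per_cm3):
    # P = n_b * (4/3) pi R^3; P >= 1 means the true-vacuum bubbles percolate.
    R = bubble_radius_cm(z_b)
    return n_b_per_cm3 * (4.0 / 3.0) * np.pi * R ** 3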
For the range of parameters in the lower part of the plot, in contrast, the universe is safe, though the primordial black holes may initiate false vacuum decay. With the help of Eq. (14), we can convert the temperature of the universe at the time of the primordial black hole formation to the primordial black hole mass. This is shown in Fig. 5. We can see that lighter black holes are more dangerous than the more massive ones because they evaporate quickly, form the true vacuum bubbles earlier, and the bubbles have more time to grow.

V. CONCLUSIONS

We demonstrated here that it is possible to connect the parameters of the Higgs potential with the primordial black hole masses and the physics of their formation (in our case the spectral index of the perturbations that lead to their formation). We used the recent result that corrections due to black hole seeds can significantly increase the tunneling probability from the false to the true Higgs vacuum. Any primordial black hole that had enough time to lose its mass from its formation till today to fit into an appropriate mass range for the corresponding choice of the potential parameters will trigger the false vacuum decay. However, just triggering the decay is not enough to destroy the universe and automatically exclude the associated black hole mass range. For a successful completion of the first order phase transition, the bubbles have to percolate, which in turn depends on the number density of the created bubbles. Since every black hole that can initiate the false vacuum decay will create a bubble, the number of the bubbles is equal to the number of such primordial black holes. We then trace the evolution of the bubbles. The bubbles of the true vacuum expand with the speed of light, but the background universe expands as well, so their number density decreases. We define a quantity P, which represents the portion of the universe that has already transitioned to the new vacuum at the current time. For P ≥ 1, the universe is destroyed, and the associated range of parameters is excluded. Our procedure can be used in two ways. If we use the Higgs potential parameters as an input, we can constrain the black hole masses and the physics of formation (e.g. the spectral index of perturbations). In turn, if we ever discover primordial black holes of definite mass, we can use them to constrain the Higgs potential parameters, or indeed the presence of extra dimensions [50,51].
4,940.6
2019-09-02T00:00:00.000
[ "Physics" ]
BTSD: A curated transformation of sentence dataset for text classification in Bangla language The Bangla Transformation of Sentence Classification dataset addresses the resource gap in natural language processing (NLP) for the Bangla language by providing a curated resource for Bangla sentence classification. With 3,793 annotated sentences, the dataset focuses on categorizing Bangla sentences into Simple, Complex, and Compound classes. It serves as a benchmark for evaluating NLP models on Bangla sentence classification, promoting linguistic diversity and inclusive language models. Collected from publicly accessible Facebook pages, the dataset ensures balanced representation across the categories. Preprocessing steps, including anonymization and duplicate removal, were applied. Three native Bangla speakers independently assessed the Transformation of Sentence labels, enhancing the dataset's reliability. The dataset empowers researchers, practitioners, and developers to build accurate and robust NLP models tailored to the Bangla language. It offers insights into Bangla syntax and structure, benefiting linguistic research. The dataset can be used to train models, uncover patterns in Bangla language usage, and develop effective NLP applications across domains.
© 2023 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY license ( http://creativecommons.org/licenses/by/4.0/ ).
Value of the Data
• The Bangla Transformation of Sentence Classification dataset fills a crucial gap in resources for the Bangla language in the field of natural language processing, specifically for sentence classification. It offers a carefully annotated and categorized dataset containing 3,793 Bangla sentences, enabling the development and training of NLP models tailored to the unique characteristics of the Bangla language.
• The dataset's diverse representation of sentence types and source domains allows for advancements in understanding Bangla syntax and structure, making it a valuable resource for linguistic research.
• Researchers, practitioners, developers, and data scientists in the field of natural language processing can benefit from the Bangla Transformation of Sentence Classification dataset, as it provides valuable resources for building more accurate and robust NLP models tailored to the Bangla language. Linguists and language enthusiasts can leverage this dataset to gain insights into Bangla syntax and structure, promoting a better understanding of the language.
• The dataset can be used to train and evaluate NLP models for sentence classification in the Bangla language, leading to the development of more accurate and effective applications. Researchers can analyze the dataset to uncover patterns and trends in Bangla language usage across different domains, such as literature, news articles, and social media.
Objective
The objective of the Bangla Transformation of Sentence Classification dataset is to provide a curated resource for NLP researchers and practitioners working on Bangla sentence classification. It aims to facilitate the development of tailored NLP models for the Bangla language by addressing the resource gap [1]. The dataset focuses on classifying Bangla sentences into three categories, promoting linguistic diversity and inclusive language models [2]. It serves as a benchmark for evaluating NLP model performance on Bangla sentence classification, enabling effective approach identification. The ultimate objective is to advance the understanding and processing of Bangla text, leading to more accurate and robust sentence classification models that benefit the Bangla-speaking population [3].
Data Description
The cornerstone of our research is the 'Bangla Transformation of Sentence Dataset (BTSD),' a meticulously curated collection of sentences specifically tailored for this study. The dataset, available as the raw data file named "Bangla Transformation of Sentence Dataset(BTSD).xlsx" in the repository, consists of 3,793 sentences sourced from publicly accessible Facebook pages. The BTSD dataset has undergone careful curation to ensure its reliability and suitability for our research objectives. One crucial aspect of this curation process was maintaining an equal distribution of sentences across three distinct categories: Simple, Complex, and Compound. This balanced representation facilitates the model's ability to learn and generalize across various linguistic structures and complexities.
Fig. 1 illustrates the distribution of sentence categories within the dataset. We acknowledge the significance of the Bengali language in our research context. Bengali belongs to the Indo-Aryan branch of the Indo-European language family, closely related to languages such as Assamese and Odia. It serves as the primary language in Bangladesh and the Indian states of West Bengal, Tripura, and Assam. Bengali is also spoken by diaspora communities worldwide. As the official language of Bangladesh and one of the 22 scheduled languages of India, Bengali boasts a substantial global speaker population, estimated at approximately 228 million [4]. Table 1 provides a detailed description of the variables present in the dataset. Table 2 presents a list of the 20 most frequently occurring words in the dataset, along with their corresponding frequencies. However, it is important to acknowledge the limitations of this list. We did not remove stopwords from the dataset, which can impact the informativeness of the list. Stopwords are commonly used words in the language that do not carry significant meaning and are typically excluded from text analysis. Therefore, the inclusion of stopwords in the list may not provide a comprehensive representation of the most significant terms in the dataset. Nevertheless, analyzing the most common words still provides valuable insights into the common vocabulary present in the text samples. It helps identify significant linguistic features and patterns within the dataset, guiding the development of language models and algorithms. By focusing on the prevalent words, more accurate predictions and classifications can be achieved. Furthermore, the most common words list assists in data preprocessing tasks such as stop-word removal and feature selection, contributing to the creation of a more refined and effective dataset for training NLP models.
Table 1. Dataset columns and their descriptions.
Raw Sentence In Bangla Language — The string representation of the original text in the Bengali language; the original Bangla sentence obtained from Facebook pages. Examples (in translation): (Birds return home in the evening), (Dusk falls and the birds return home), (When the evening comes, the birds return home).
Labels of Transformation Sentence — The string representation of the label assigned to each transformed sentence; the category of the sentence, classified as Simple, Complex, or Compound.
Experimental Design, Materials and Methods
The dataset creation workflow follows a systematic process. Initially, posts from Facebook were manually extracted, and their content was compiled into an Excel file. Subsequently, the aggregated dataset underwent several preprocessing steps, including anonymization, duplicate removal, and filtering out any instances of profane language. In the third stage, a meticulous assessment of the dataset's Transformation of Sentence labels was carried out by three native Bangla speakers. Each assessor independently assigned labels based on three distinct polarities: Simple, Complex, and Compound. The categorization of sentences into simple, complex, and compound is a widely recognized classification scheme employed in linguistic analysis to examine sentence structures across different languages, including Bengali. Although these classifications are not exclusive to Bengali linguistics, they serve as fundamental tools in the field of language analysis. To provide a more precise elucidation of these classifications [5]:
I. Simple Sentence: A simple sentence comprises a single independent clause that conveys a complete thought or idea. It consists of a subject and a predicate. For instance, the sentence " " (I love Bengali) exemplifies a simple sentence in Bengali.
II. Complex Sentence: A complex sentence encompasses an independent clause and one or more dependent clauses. Dependent clauses contribute supplementary information or contextual details to the independent clause. Consider the sentence " , " (When I study Bengali, I feel good). In this sentence, the dependent clause " " (When I study Bengali) provides additional information to the independent clause " " (I feel good).
III. Compound Sentence: A compound sentence consists of two or more independent clauses connected by coordinating conjunctions or appropriate punctuation marks. Each independent clause can function independently as a separate sentence. For example, the sentence " , " (I study Bengali, and my friend writes in Bengali) exemplifies a compound sentence. Here, the independent clauses " " (I study Bengali) and " " (My friend writes in Bengali) are connected by the coordinating conjunction " " (and).
The data was annotated by skilled native Bangla speakers following a comprehensive protocol: inter-annotator agreement (IAA) measures were employed. A subset of the data was randomly selected and annotated by multiple annotators independently. The annotations were then compared and analyzed for agreement using standard IAA metrics, such as Cohen's kappa coefficient or percentage agreement. The level of agreement between annotators was a crucial factor in ensuring the reliability and validity of the annotated dataset. Table 3 shows the annotation protocol as methodological pseudocode.
The accuracy of four state-of-the-art neural network-based deep learning models in classifying text data into three classes from our dataset was assessed. All models were trained for 50 epochs, where each epoch represents a complete pass through the entire dataset. The batch size was set to 64, indicating that the model would update its weights after processing 64 samples at a time. A comparative analysis was conducted to evaluate the performance of LSTM, bi-LSTM, Conv1D, and combined Conv1D-LSTM-based models, as outlined in Table 4. The highest accuracy, 91.17%, was achieved by the Conv1D-LSTM-based model (a minimal sketch of such a model is given below).
This thorough assessment ensures the dataset's reliability and accuracy, enhancing its value for research purposes. The dataset presented in this article serves as a foundation for research not only in sentence classification but also opens avenues for exploration in various domains of language processing in the Bangla language. It provides a valuable resource for researchers seeking to delve into broader aspects of Bangla language analysis, contributing to advancements in the field of natural language processing and facilitating a deeper understanding of the intricacies of the Bangla language.
Ethics Statements
No human or animal studies were conducted in this research. We anonymized all content from social media pages, and no records of personal information were kept. We adhered to Facebook's redistribution policies [6,7], and no permission was required for using content from publicly open Facebook pages.
Fig. 1. The class distribution of each label.
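The text reports only the number of epochs and the batch size; the architecture below is a minimal sketch of a Conv1D-LSTM sentence classifier of the kind evaluated in Table 4, with assumed vocabulary size, sequence length, and layer widths rather than the authors' actual configuration.

```python
# Minimal sketch of a Conv1D-LSTM classifier for the three BTSD classes.
# vocab_size, max_len and layer widths are assumptions, not values from the paper.
from tensorflow.keras import layers, models

vocab_size, max_len, num_classes = 20000, 64, 3

model = models.Sequential([
    layers.Embedding(vocab_size, 128),          # integer-encoded Bangla tokens -> embeddings
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training protocol matching the description: 50 epochs, batch size 64.
# X_train: (n_sentences, max_len) integer token ids; y_train: labels in {0, 1, 2}.
# model.fit(X_train, y_train, epochs=50, batch_size=64, validation_split=0.1)
```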
Fig. 2 depicts the distribution of text length within the dataset, specifically categorized into three types: Simple, Complex, and Compound. The graph provides insights into the varying lengths of sentences across these categories, highlighting potential differences in sentence structure and complexity. This information is crucial for developing a comprehensive dataset as it helps in understanding the distribution patterns and ensures a balanced representation of text lengths in the training data. It aids in creating models that can effectively handle sentences of different lengths, enhancing the dataset's usability for various natural language processing tasks.
Fig. 2. Distribution of text length (Simple, Complex, Compound).
Table 2. The 20 most common words and their frequencies.
Table 4. Performance of neural network-based deep learning models on the BTSD dataset.
2,802.8
2023-07-01T00:00:00.000
[ "Computer Science", "Linguistics" ]
Age-dependent changes in the neural correlates of force modulation: An fMRI study Functional imaging studies in humans have demonstrated widespread age-related changes in cortical motor networks. However, the relative contribution of cortical regions during motor performance varies not only with age but with task parameters. In this study, we investigated whether motor system activity during a task involving increasingly forceful hand grips was influenced by age. Forty right-handed volunteers underwent functional magnetic resonance imaging whilst performing repetitive isometric hand grips with either hand in separate sessions. We found no age-related changes in the average size and shape of the task-related blood oxygen level dependent (BOLD) signal in contralateral primary motor cortex (M1), but did observe reduced ipsilateral M1 deactivation in older subjects (both hands). Furthermore, task-related activity co-varied positively with force output in a number of brain regions, but was less prominent with advancing age in contralateral M1, cingulate sulcus (both hands), sensory and premotor cortices (right hand). These results indicate that a reduced ability to modulate activity in appropriate motor networks when required may contribute to age-related decline in motor performance. We therefore carried out a functional magnetic resonance imaging (fMRI) experiment to specifically look for age-related changes in the task-related behaviour of motor and motor-related cortical regions. Subjects were asked to perform a simple isometric hand grip task with parametric modulation of force output. We used a sparse event-related design with target forces set as a proportion of each subject's own maximum grip force in order to ensure that our results would not be affected by differences in task performance or perceived task difficulty. Furthermore, we used a random effects model to analyse the data in order to minimise the potential effect of a reduced blood oxygen level dependent (BOLD) signal to noise ratio in older subjects (D'Esposito et al., 1999). Our first aim was to examine for age-related changes in the average magnitude and shape of BOLD responses during hand grip. Secondly, in view of the neurophysiological findings suggesting a reduced ability to engage the corticospinal system by motor commands in older subjects (Pitcher et al., 2003; Sale and Semmler, 2005), we also looked for age-related differences in brain regions involved in the modulation of force production. Previous studies in healthy humans have shown increasing activity in brain regions contributing to corticospinal projections with increasing force production (Dettmers et al., 1995; Thickbroom et al., 1998; Ward and Frackowiak, 2003). We predicted that the force-related increase in brain activity would be less prominent in older subjects, indicating a reduced ability to change brain activity in response to changing motor task parameters. Subjects Forty healthy volunteers (age range 21-75 years, mean 48.9 years, S.D. 16.9 years), comprising 22 male and 18 female subjects, participated in the study. All subjects were right handed according to the Edinburgh handedness scale (Oldfield, 1971). They reported no history of neurological or psychiatric illness, vascular disease or hypertension. Subjects were not taking regular medication. Full written consent was obtained from all subjects in accordance with the Declaration of Helsinki.
The study was approved by the Joint Ethics Committee of the Institute of Neurology, UCL and National Hospital for Neurology and Neurosurgery, UCL Hospitals NHS Foundation Trust, London. Behavioural evaluation Maximum grip strength with each hand was measured for each subject using a Jamar hydraulic hand dynamometer (Fabrication Enterprises, Inc., NY, USA). Motor paradigm All subjects underwent two consecutive scanning sessions, one using the dominant right hand, and one using the non-dominant left hand. The order of these sessions was randomised and counterbalanced across subjects. During scanning, all subjects performed a series of dynamic isometric hand grips using an MRI-compatible manipulandum as previously described (Ward and Frackowiak, 2003). Continuous visual feedback about the force exerted was provided. Prior to scanning, but whilst lying in the scanner, subjects were asked to grip the manipulandum using maximum force to generate a maximum voluntary contraction (MVC) for each hand. A single scanning session comprised 30 visually cued hand grips interspersed with 30 null events in a randomised and counterbalanced order (SOA = 5.72 s, scanning time 6 min 14 s). The onset and target force of each single hand grip was visually cued. The target force was varied such that ten grips at each of 15%, 30% and 45% of MVC were performed (30 in total) in a random order. Subjects were instructed to use the visual target as a guide to the level of force production, but were not specifically asked to be accurate in order to avoid prolonged hand grips. All subjects practised the motor task in two 3 min blocks: once outside the scanner and for a second time in the scanner before scanning started. To look for bilateral movements during scanning, subjects held identical hand grip manipulanda in both hands while carrying out the task unimanually. Data acquisition A 3T Siemens ALLEGRA system (Siemens, Erlangen, Germany) was used to acquire both T 1 -weighted anatomical images and T 2 * -weighted MRI transverse echoplanar images (EPI) (64 mm × 64 mm, 3 mm × 3 mm pixels, TE = 30 ms) with BOLD contrast. Each echoplanar image comprised forty-eight 2 mm thick contiguous axial slices taken every 3 mm, positioned to cover the whole cerebrum, with an effective repetition time (TR) of 3.12 s per volume. In total, 120 volumes were acquired during each scanning session. The first six volumes were discarded to allow for T 1 equilibration effects. Data preprocessing Imaging data were analysed using Statistical Parametric Mapping (SPM5, Wellcome Department of Imaging Neuroscience, http://www.fil.ion.ucl.ac.uk/spm/) implemented in Matlab 6 (The Mathworks Inc., USA) (Friston et al., 1995b; Worsley and Friston, 1995). All volumes were realigned and slice-time corrected. No subject moved more than 2 mm in any direction, but some of this movement was task-related. In order to remove some of this unwanted movement-related variance without removing variance attributable to the motor task, realigned images were processed using the 'unwarp' toolbox in SPM5 (Andersson et al., 2001), which is predicated on the assumption that susceptibility-by-movement interaction is responsible for a sizeable part of residual movement-related variance. Given the observed variance (after realignment) and the realignment parameters, estimates of how deformations changed with subject movement were made, which were subsequently used to minimise movement-related variance.
The resulting volumes were then normalised to a standard EPI template based on the Montreal Neurological Institute (MNI) reference brain in Talairach space (Talairach and Tournoux, 1998) and resampled to 3 mm × 3 mm × 3 mm voxels. All normalised images were then smoothed with an isotropic 8 mm full-width half-maximum Gaussian kernel to account for intersubject anatomical differences and allow valid statistical inference according to Gaussian random field theory (Friston et al., 1995a). The time series in each voxel were high-pass filtered at 1/128 Hz to remove low-frequency confounds and scaled to a grand mean of 100 over voxels and scans within each session. Statistical analysis Statistical analysis was performed in two stages. In the first stage, data from the right and left hand of each subject were analysed separately using a single-subject, single-session fixed effects model. All hand grips were defined as a single event type and modelled as delta functions (grip covariate). A second covariate (force covariate) comprised a delta function scaled by the actual peak force exerted for each hand grip. The force covariate was mean-corrected and orthogonalised with respect to the grip covariate. Both covariates were convolved with a canonical synthetic haemodynamic response function (HRF) together with its temporal and dispersion derivatives (Friston et al., 1998a) and used in a general linear model (Friston et al., 1995b) together with a single covariate representing the mean (constant) term over scans. The canonical HRF represents a typical BOLD response derived from a principal component analysis of data reported by Friston et al. (1998b). The temporal derivative is approximated by the orthogonalised finite difference between canonical HRFs of peak delay of 7 s versus 6 s, whereas the dispersion derivative is approximated by the orthogonalised finite difference between canonical HRFs of peak dispersions of 1 versus 1.01 (Friston et al., 1998a). Thus for each subject, voxel-wise parameter estimates for each covariate resulting from the least mean squares fit of the model to the data were calculated. The parameter estimates (or betas) for the grip covariate reflect the magnitude of increase in the BOLD signal during all hand grips compared to rest (B G ). Positive parameter estimates of the temporal derivative (B T ) result when the evoked haemodynamic response peak occurs earlier than the canonical haemodynamic response function. Positive parameter estimates of the dispersion derivative (B D ) result when the evoked haemodynamic response has a shorter duration compared to the canonical haemodynamic response function. Parameter estimates for the force covariate (B F ) represent the partial correlation coefficient of BOLD signal plotted against hand grip force, i.e. the degree to which BOLD signal changes linearly with hand grips of different force (Buchel et al., 1998). The statistical parametric maps of the t statistic (SPM{t}) resulting from linear contrasts of each covariate (Friston et al., 1995b) were generated and stored as separate images for each subject. The data for the second stage of analysis comprised the pooled parameter estimates for each covariate across all subjects. Contrast images for each subject were entered into a one-sample t-test for each covariate of interest. The SPM{t}s were thresholded at P < 0.05, corrected for multiple comparisons across whole brain.
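As a compact illustration of this first-level model, the sketch below builds a design matrix containing a grip regressor and a mean-corrected force regressor orthogonalised with respect to it, both convolved with a double-gamma approximation of the canonical HRF (the temporal and dispersion derivatives are omitted for brevity). The TR, number of scans, HRF shape and event times are simplified assumptions, not the SPM implementation.

```python
import numpy as np
from scipy.stats import gamma

TR, n_scans = 3.12, 114                         # assumed values consistent with the acquisition
frame_times = np.arange(n_scans) * TR

def canonical_hrf(t):
    # Double-gamma HRF: response peaking near 6 s with a late undershoot
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

def make_regressor(onsets, amplitudes, dt=0.1):
    """Delta functions at event onsets, convolved with the HRF, resampled to scan times."""
    t_hi = np.arange(0, n_scans * TR, dt)
    stick = np.zeros_like(t_hi)
    for o, a in zip(onsets, amplitudes):
        stick[int(o / dt)] += a
    hrf = canonical_hrf(np.arange(0, 32, dt))
    conv = np.convolve(stick, hrf)[: len(t_hi)]
    return np.interp(frame_times, t_hi, conv)

onsets = np.arange(30) * 11.44                  # hypothetical grip onset times (s)
forces = np.random.choice([15, 30, 45], size=30)  # target forces, % MVC

grip = make_regressor(onsets, np.ones(30))                  # grip covariate
force = make_regressor(onsets, forces - forces.mean())      # mean-corrected force covariate
force -= grip * (force @ grip) / (grip @ grip)              # orthogonalise w.r.t. grip

X = np.column_stack([grip, force, np.ones(n_scans)])        # design matrix: grip, force, constant
```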
After characterizing the average group effects, we were interested in examining the influence of age on the parameter estimates B G , B T , B D and B F . Thus, we performed simple linear regression analyses within SPM5, in which the two orthogonal covariates were: (i) contrast images for each subject for the effect of interest (B G , B T , B D or B F ) and (ii) a single value representing age 2 for each subject (mean-corrected and normalized across the group). We hypothesized a priori that we would find non-linear changes in activation in keeping with previous behavioural (Smith et al., 1999) and imaging (Ward and Frackowiak, 2003) data and so chose to use age 2 rather than age as the second covariate. SPM{t}s representing brain regions in which there is a linear relationship between the relevant parameter estimates and age 2 were generated. The height threshold was set at P < 0.001, uncorrected, for multiple comparisons across whole brain, and the extent (or cluster) threshold set at P < 0.05, corrected for multiple comparisons across whole brain. For significant voxels the correlation coefficient for the plot of parameter estimate against age 2 was also calculated to illustrate the relationship. All SPM{t}s were transformed to the unit normal Z-distribution to create a statistical parametric map (SPM{Z}). All t-tests carried out within SPM were one-tailed. Anatomical identification was carefully performed by superimposing the maxima of activation foci both on the MNI brain and on the normalised structural images of each subject, and labelling with the aid of the atlas of Duvernoy (1991). Behavioural results The mean MVC for each hand was calculated (right hand, mean MVC = 43.5 kg, S.D. = 5.5 kg; left hand, mean MVC = 39.8 kg, S.D. = 4.3 kg). There was no significant correlation between age or age 2 and peak MVC for either hand. There were no significant gender-related differences in MVC and no interaction between age or age 2 and gender. Main effects of hand grip The main effects of hand grip were consistent with previous reports using this paradigm (Ward and Frackowiak, 2003). Activations were seen in a network of regions, which was similar for right and left hands (Fig. 1 and Supplementary Fig. 1). The most lateralised activations were in contralateral sensorimotor cortex and ipsilateral superior cerebellum. Other activations were bilaterally distributed, including dorsolateral premotor cortex (PMd) and ventrolateral premotor cortex (PMv), supplementary motor area (SMA), cingulate motor areas, inferior parietal cortex and intraparietal sulcus, insula cortex, visual cortices, cerebellar vermis, and both inferior and superior cerebellar hemispheres. Temporal and dispersion derivatives Earlier responses (positive parameter estimates for the temporal derivative) were seen in the pulvinar bilaterally for both the right and left hand task. Later responses (negative parameter estimates for the temporal derivative) were observed in the left cerebellum (crus I) and right parietal cortex for both the right and left hand task. Fig. 1. SPM{Z}s representing the main effect of hand grip as detected using a canonical haemodynamic response function. Results are displayed on 'glass brains' in two columns representing results from using the right and left hands, respectively. The glass brains are shown from the right side (top image), from above (middle image), and from below (bottom image). Voxels are significant at P < 0.001 (corrected) for the purposes of display.
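The second-stage regression described above amounts to a voxel-wise fit of the first-level contrast estimates against a mean-corrected age² covariate. The sketch below shows that step in a hedged, simplified form: array names, shapes and the plain ordinary-least-squares t-statistic are illustrative assumptions rather than the SPM5 machinery.

```python
import numpy as np

def age2_regression(con_images, ages):
    """con_images: (n_subjects, n_voxels) first-level contrast estimates (e.g. B_F);
    ages: (n_subjects,) subject ages. Returns one t-value per voxel for the age^2 effect."""
    x = ages.astype(float) ** 2
    x = (x - x.mean()) / x.std()                  # mean-corrected, normalised age^2 covariate
    X = np.column_stack([np.ones_like(x), x])     # constant + age^2
    beta, *_ = np.linalg.lstsq(X, con_images, rcond=None)
    resid = con_images - X @ beta
    dof = con_images.shape[0] - 2
    sigma2 = (resid ** 2).sum(axis=0) / dof
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

# Example with random data: 40 subjects, 1000 voxels
t_map = age2_regression(np.random.randn(40, 1000), np.random.uniform(21, 75, 40))
```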
Table 1. Voxels in which some part of the task-related response is accounted for by the temporal or dispersion derivatives of the canonical haemodynamic response function. All voxels are significant at P < 0.05, corrected for multiple comparisons across whole brain. The right parietal clusters were centred on the intraparietal sulcus but extended into both superior and inferior parietal cortex. There were no regions in which the neural response was more dispersed (negative parameter estimates for the dispersion derivative). Less dispersion (positive parameter estimates for the dispersion derivative), i.e. shorter duration of haemodynamic response, was observed in left posterior superior cingulate sulcus for both the right and left hand task (Table 1; Fig. 2). Force-related changes Regions in which the magnitude of task-related signal increased linearly with increasing hand grip force were seen in contralateral anterior M1 (BA 4a) and superior cingulate sulcus, ipsilateral cerebellum (lobule VI), and primary visual cortex for both the right and left hand task (Table 2). At a lower threshold (P < 0.001, uncorrected) the cluster centred on contralateral M1 (BA 4a) was found to extend in the rostral-caudal direction from y = −38 to −7 for the right hand and from y = −34 to −2 for the left hand. In the ventral-dorsal direction, the cluster extended from z = 38 to 74 for the right hand and from z = 32 to 68 for the left hand. Both these clusters additionally encompassed contralateral primary sensory cortex (S1), posterior M1 (BA 4p) and caudal dorsolateral premotor cortex. There was a trend for decreasing magnitude of signal change with increasing grip force with the right hand in left (contralateral) ventrolateral premotor cortex (x = −56, y = 14, z = 36, Z-score = 4.19). Age-related changes in the main effects of hand grip There were no negative correlations between task-related changes in signal and increasing age 2 , but positive correlations (i.e. greater task-related signal change with increasing age 2 ) were observed in a number of brain regions, more so for the left hand task than the right hand task (Table 3). There were no age-related changes in contralateral M1, but a positive correlation between task-related signal and increasing age 2 was found in ipsilateral M1 for both the right and left hand task. In younger subjects, task-related signal change in ipsilateral M1 was reduced compared to rest as previously described (Newton et al., 2005), but this reduction was less marked in older subjects (Fig. 3). In some older subjects, B G (the parameter estimate representing the average signal change for all hand grips compared to rest) was positive, indicating greater activity during hand grip compared to rest. Age-related increases in task-related signal were also seen in the putamen bilaterally for both the right and left hand task. Further age-related increases were noted in dorsolateral premotor cortex bilaterally and in left (ipsilateral) intraparietal sulcus, though only for the left hand task. Age-related changes in the temporal and dispersion derivatives No brain regions exhibited earlier or later haemodynamic responses as a function of age 2 with either hand. Task-related responses were more dispersed (i.e. longer) in older subjects in bilateral intraparietal sulcus and bilateral cerebellum (lobule VI) during the left hand task (Table 4). Right hand task responses were more dispersed in older subjects in right intraparietal sulcus only.
Age-related changes in response to force modulation The parameter estimate for the force covariate (B F ) represents the partial correlation coefficient of BOLD signal plotted against hand grip force for each subject. When the right hand was used, a negative correlation between B F and age 2 was seen in contralateral primary sensory cortex, primary motor cortex, dorsolateral premotor cortex and anterior cingulate sulcus. In other words, the degree to which BOLD signal consistently increases with hand grips of increasing force diminished with increasing age in these brain regions. When the left hand was used, the negative correlation was observed only in contralateral primary motor cortex and posterior cingulate sulcus (Fig. 4). No positive correlations between B F and age 2 were observed at the statistical threshold used. However, we have previously reported that brain activity in ventral premotor cortex/BA44 bilaterally co-varies positively with peak grip force in older subjects more than younger subjects (Ward and Frackowiak, 2003), and also in stroke patients with greater damage to the corticospinal tract (Ward et al., 2007). Furthermore, in the current study we observed that with increasing age there was an overall trend towards greater brain activity with increasing peak grip force (right hand) in left ventrolateral premotor cortex. We were therefore interested to examine post hoc for age-related changes in the relationship between ventrolateral premotor cortex activity and peak grip force. Regions of interest were created as follows.
Table 2. Voxels in which hand grip-related signal change increases linearly with increasing hand grip force. All voxels are significant at P < 0.05, corrected for multiple comparisons across whole brain.
Table 3. Voxels in which there is a correlation between the magnitude of activation during hand grip (B G ) and age 2 . All peak voxels are significant at a height threshold of P < 0.001, uncorrected, and extent (cluster) threshold of P < 0.05, corrected for multiple comparisons across whole brain. Z-values marked '*' indicate that the voxel is significant at a height threshold of P < 0.05, corrected for multiple comparisons across whole brain.
Table 4. Voxels in which there is a correlation between the parameter estimate for the temporal (B T ) or dispersion (B D ) derivative of the haemodynamic response function during hand grip and age 2 . All peak voxels are significant at a height threshold of P < 0.001, uncorrected, and extent (cluster) threshold of P < 0.05, corrected for multiple comparisons across whole brain. Voxels marked '*' are significant at a height threshold of P < 0.05, corrected for multiple comparisons across whole brain. Cerebellar localization performed from Schmahmann et al. (1999).
Fig. 4. Parameter estimates for the force covariate, B F (which represents the partial correlation coefficient between BOLD signal and hand grip force), are plotted against age 2 for (A) contralateral primary motor cortex, primary sensory cortex and dorsolateral premotor cortex; (B) contralateral cingulate sulcus; (C) contralateral primary motor cortex; (D) contralateral cingulate sulcus. The significant clusters (P < 0.05, corrected) are overlaid onto the mean normalised T 1 -weighted structural image obtained from all subjects. Peak co-ordinates, Z-scores and correlation coefficients are given in Table 5.
We created a region of interest using a 20 mm diameter sphere centred on the coordinates x = 48, y = 14, z = 16, and x = −42, y = 20, z = 18, derived from Ward and Frackowiak (2003). We then looked for voxels within each region of interest which exhibited age-related changes in B F at a threshold of P < 0.001, uncorrected. With this approach, positive correlations between B F and age 2 were seen in ventrolateral premotor cortex/BA44 bilaterally for both right hand use (x = 50, y = 10, z = 18, Z-score = 3.11, r 2 = 0.27 and x = −50, y = 20, z = 24, Z-score = 3.04, r 2 = 0.29) and left hand use (x = 46, y = 22, z = 24, Z-score = 3.18, r 2 = 0.26 and x = −46, y = 22, z = 34, Z-score = 3.37, r 2 = 0.30) (Fig. 5). These results are presented as subthreshold trends and so are not reported in Table 5. None of the above results were influenced by adding gender as an additional covariate, in keeping with a previous absence of gender effect using the same hand grip paradigm (Ward and Frackowiak, 2003).
Table 5. Voxels in which there is a correlation between the parameter estimate representing linear changes in BOLD signal with grip force (B F ) during right hand grip and age 2 . All peak voxels are significant at a height threshold of P < 0.001, uncorrected, and extent (cluster) threshold of P < 0.05, corrected for multiple comparisons across whole brain. Voxels marked '*' are significant at a height threshold of P < 0.05, corrected for multiple comparisons across whole brain.
Discussion We have used fMRI to study age-related changes in the extent and timing of motor system activation during a simple visuomotor task performed with each hand in a sparse event-related design. We have shown that the canonical HRF captures all the experimentally induced signal change in cortical regions involved in motor output and that the shape of the HRF in these regions varies little in relation to age. We have confirmed that there are increases but no decreases in motor system activation with increasing age, a process which is likely to accelerate as aging progresses. Lastly, by asking subjects to vary levels of force production we have been able to show for the first time that the motor cortices of older subjects are less able to increase activity when increasing force output is required. There is some evidence that increasing activity with greater force production is more prominent in the ventral premotor cortices of older compared to younger subjects. Main effects of hand grip The visuomotor task employed in this experiment activates a widely distributed brain network as previously reported (Ward and Frackowiak, 2003). The use of both temporal and dispersion derivatives of the canonical HRF has enabled the identification of regional variations in haemodynamic response during this task. Earlier responses were seen in the pulvinar bilaterally. The onset of our canonical HRF was taken as the beginning of hand grip rather than the prior visual cue, and thus earlier responses in the pulvinar are to be expected and are in keeping with its role in early selective visual attention (Robinson and McClurkin, 1989). Strong bilateral intraparietal sulcus activations were captured by the canonical HRF, but delayed responses captured by the temporal derivative were seen in right parietal structures centred upon the intraparietal sulcus and left cerebellum (crus I). These responses were lateralised in the brain independently of the hand used.
The left parietal cortex is most often associated with attention towards a motor act (Rushworth et al., 2001, 2003), but increased activation in right parietal structures has been associated with the cessation of a motor act (Maguire et al., 2003; Rubia et al., 2001). Thus, sustained response in right parietal cortex might reflect cessation of hand grip, or possibly continued attention to visual feedback of force production after grip cessation. A similar delay in left cerebellar (crus I) activity irrespective of which hand was used may reflect parieto-cerebellar connections important in visually guided motor tasks (Brodal and Steen, 1983; Glickstein, 2003). Thus in this event-related study the synthetic canonical HRF, with onset at the beginning of hand grip, was able to account for almost all of the experimentally induced variance (Fig. 1). Variations in HRF were independent of the hand used, and so it is unlikely that these changes were directly related to the generation of lateralised motor output, but rather some cognitive aspect of the task common to both. Age-related changes A reduced BOLD signal to noise ratio (SNR) in M1 has been reported in older compared to younger subjects (D'Esposito et al., 1999). Differential SNR across ages is a potential problem when looking for age-related changes if the error variance is dominated by within-subject variability, as in a fixed effects analysis. In this case the lower SNR will result in fewer suprathreshold voxels even though the magnitude of activation is no different (D'Esposito et al., 1999). The problem can be overcome by employing a random effects analysis where the error variance is dominated by between-subjects variability, as in the current study. Thus, any age-related changes in magnitude of activation in this study are unlikely to be related to differential SNR. We were particularly interested in age-related changes in M1, as previous studies have provided conflicting results (see Ward, 2006 for review). With regard to contralateral M1, there have been reports of decreases in extent (D'Esposito et al., 1999) or magnitude (Hesselmann et al., 2001; Hutchinson et al., 2002; Riecker et al., 2006; Tekes et al., 2005; Wu and Hallett, 2005) of activation in older subjects, whereas others have reported no change (Calautti et al., 2001; Heuninckx et al., 2005; Ward and Frackowiak, 2003). Mattay et al. (2002) found increased contralateral M1 activity in older subjects, but only in those with a similar level of motor performance to the young subjects. The differences in results are likely to be due to the use of different paradigms ranging from simple to more complex motor tasks, and different methods of analysis which are more or less susceptible to the effects of reduced SNR in older subjects (D'Esposito et al., 1999). In our event-related study, we found no age-related changes in the average magnitude or shape of the BOLD response in contralateral M1, in keeping with the findings of D'Esposito et al. (1999). More consistent changes have been found in ipsilateral M1. During motor tasks, there is a reduction in M1 BOLD signal ipsilateral to the moving hand (Allison et al., 2000; Newton et al., 2005), but in older subjects, this deactivation appears reduced (Hutchinson et al., 2002; Naccarato et al., 2006; Riecker et al., 2006; Ward and Frackowiak, 2003). In a previous study employing hand grip, we found decreasing ipsilateral M1 deactivation with increasing age 2 (Ward and Frackowiak, 2003) for both left and right hand use.
We have replicated this result, which supports the recent finding of a shift in laterality towards ipsilateral M1 with increasing age during a finger opposition task (Naccarato et al., 2006). The mechanism of deactivation in ipsilateral M1 during the performance of a motor task is thought to be via transcallosal inhibition. Older subjects appear to have reduced excitability of intracortical inhibitory circuits in motor cortex as assessed with short-interval paired-pulse TMS (Kossev et al., 2002; Peinemann et al., 2001) and the EMG silent period (Eisen et al., 1996; Prout and Eisen, 1994; Sale and Semmler, 2005). It is therefore possible that aging also leads to impaired transcallosal inhibition of ipsilateral M1. However, it is interesting to speculate that our finding of age-related increases in task-related signal in the putamen bilaterally suggests that changes in cortico-subcortical connections may also play a role in this shift in hemispheric balance, as M1 and putamen are intimately connected (Kelly and Strick, 2004). Mirror movements are often considered to confound the interpretation of ipsilateral M1 activation. We did not observe mirror movements during task practice outside the scanner in any subject, and neither did we detect mirror gripping during scanning with our force transducers. However, we cannot rule out a very small level of mirror activity not picked up by our methods, which might be detectable only by careful EMG. However, we would suggest that any mirror EMG activity is likely to be the product of a less inhibited ipsilateral M1 rather than confounding voluntary activity. As well as changes in M1 recruitment, age-related changes have also been reported in non-M1 regions during the performance of a variety of motor tasks (Heuninckx et al., 2005; Hutchinson et al., 2002; Mattay et al., 2002; Rowe et al., 2006; Ward and Frackowiak, 2003; Wu and Hallett, 2005). Increased recruitment in non-motor brain regions with age is more prominent in complex motor tasks (Heuninckx et al., 2005), suggesting increased cognitive monitoring of performance. In the current study, increasing task-related activity in bilateral dorsolateral premotor cortex, left intraparietal cortex, and right cerebellum (lobule VI) was observed with increasing age with left but not right hand use. This suggests that increased monitoring or attention to the performance of the non-dominant left hand is required in the older subjects. The inclusion of temporal and dispersion derivatives allowed us to look for systematic age-related changes in the shape or timing of the haemodynamic response. Aging can affect neurovascular coupling and therefore the form of the haemodynamic response. Taoka et al. (1998) reported a slower task-related rise in M1 BOLD signal in older subjects, whereas others have reported no difference in the shape of the M1 haemodynamic response (D'Esposito et al., 1999). Most subsequent studies examining age-related effects have employed blocked designs which, although efficient, preclude examination of haemodynamic response variations. Using a sparse event-related design, we found no age-related changes in the shape of haemodynamic response in M1 for this task. However, we did find that increasing age is associated with prolonged haemodynamic responses in bilateral intraparietal sulcus and cerebellum when the left hand is used. Such changes were limited to right intraparietal sulcus for right hand use.
Although this could be the result of an altered neurovascular response in these brain regions, the result would also be in keeping with increased and/or prolonged attention to the motor task and monitoring for errors in older subjects, especially when the less automatic non-dominant left hand is used. If regional changes in neurovascular coupling were occurring as a function of age, then a similar relationship between age and the temporal and dispersion derivatives should have been seen for both right and left hand scanning sessions, since much of the activation outside M1 was bilateral. This was not the case, and it is therefore highly likely that the regional alterations in haemodynamic shape and timing are attributable to differences in cognitive approaches to the tasks. This is an important finding which suggests that fMRI is an appropriate tool for studying age-related changes in motor-related cortical regions. Force-related change Parametric modulation of target forces allows us to examine how the cortical motor system responds when increasing force output is required. For the group as a whole we observed a positive correlation between BOLD signal and grip force in contralateral M1 and superior cingulate sulcus, ipsilateral cerebellum, and primary visual cortex (in response to greater visual stimulation/feedback with increasing force), replicating the findings of previous studies (Dettmers et al., 1995; Thickbroom et al., 1998; Ward and Frackowiak, 2003). In contralateral M1 and cingulate sulcus, however, the correlation diminished with advancing age for both right and left hand tasks. A similar decline was seen in contralateral primary sensory cortex and PMd for the right hand. Thus, our fMRI data suggest that when increasing force output is required within the range of 15-45% of maximum hand grip, then cortical regions known to contribute to the corticospinal tract (Dum and Strick, 1991) are less able to increase output-related activity with advancing age. This novel finding is in keeping with results from TMS experiments which demonstrate that older subjects generally have lower MEP amplitudes in response to submaximal TMS (Eisen et al., 1991), whereas maximal MEP amplitudes are similar in all age groups (Pitcher et al., 2003). This age-related change in stimulus-output characteristics suggests that the number of large-diameter corticospinal fibres does not decline appreciably with advancing age, but the ability to activate these fibres with TMS is reduced (Pitcher et al., 2003; Sale and Semmler, 2005). Furthermore, accelerated decline of grey matter volume with advancing age has been reported in central and cingulate sulci compared to other regions (Good et al., 2001). It is interesting to speculate that both the TMS and fMRI results reflect the functional consequences of these regionally specific effects. An alternative explanation for our results might arise from the finding that in older subjects there is an increased variability of motor unit discharge in response to increasing force output (Sosnoff et al., 2004; Tracy et al., 2005; Vaillancourt et al., 2003). If the relationship between brain and muscle activity is altered in older subjects, then increases in force production might be less reliably reflected in changing BOLD signal. Our current results cannot distinguish between these possibilities. Despite these findings, all subjects were able to modulate their grip force.
We found a weakly positive correlation between B F and age 2 in PMv bilaterally, replicating a previous finding (Ward and Frackowiak, 2003). In other words, a positive correlation between BOLD signal in PMv and force output was more likely in older subjects. A small proportion of PMv cells may either increase or decrease neuronal firing rates with increasing precision grip force (Hepp-Reymond et al., 1994), but our results demonstrate that a consistently positive correlation between force output and BOLD signal was more likely to be seen in PMv rather than M1 with advancing age. Premotor regions, particularly PMv, are more active during precision compared to power grip (Ehrsson et al., 2000, 2001). It is possible that older subjects were performing the force modulation task more like a precision grip task, thus accounting for the trend towards increased modulatory behaviour in PMv. Nevertheless, it still suggests that PMv becomes increasingly functionally useful with increasing grip force with increasing age. In the older brain, existing inputs to M1 may be insufficient to increase output to spinal cord motor neurons when higher grip forces are required. In normal primates, rostral PMv (area F5) is able to facilitate motor cortex output to upper limb motor neurons (Cerri et al., 2003; Shimazu et al., 2004). Thus, additional PMv input to M1 could exert a modulatory effect by increasing the gain of M1 output. The rate of age-related change in the cortical motor system Most previous studies have made categorical comparisons between old and young subjects. Naccarato et al. (2006) recently looked for a correlation between age and a measure of the shift in laterality from contralateral to ipsilateral M1 during a simple motor task. We have previously also used a correlational approach rather than categorical comparison to look for age-related changes in motor system activation (Ward and Frackowiak, 2003) although, based on the behavioural observations that decline in motor function is non-linear and accelerates beyond the age of 60 years (Smith et al., 1999), we used age 2 rather than age as a covariate. We have used the same approach in the current study. Correlational approaches avoid making assumptions about what constitutes 'old' or 'young', and also acknowledge that the processes under investigation may be continuous throughout adult life. In summary, we have demonstrated that the configuration of the cortical motor system during a simple hand grip task changes with advancing age. Furthermore, it is clear that the way in which the motor system responds to the demands of increasing force production also changes with age. The reduced ability to modulate activity in appropriate motor networks when required may be a contributory factor in the decline of motor performance in older subjects. Furthermore, because of our experimental design, these results are likely to reflect structural and neurophysiological age-related changes in motor-related brain regions rather than purely changes in cognitive strategy. Conflict of interest All authors confirm that they have no financial, actual or potential, conflicts of interest that could inappropriately influence or bias this work.
8,275.4
2008-09-01T00:00:00.000
[ "Biology" ]
Laboratory modelling of the wind-wave interaction with modified PIV-method Laboratory experiments studying the structure of the turbulent air boundary layer over waves were carried out at the Wind-Wave Flume of the Large Thermostratified Tank of the Institute of Applied Physics, Russian Academy of Sciences (IAP RAS), under conditions modeling the near-water boundary layer of the atmosphere for strong and hurricane winds, with equivalent wind velocities from 10 to 48 m/s at the standard height of 10 m. A modified Particle Image Velocimetry (PIV) technique was used to obtain velocity fields of the air flow over the wave-curved water surface, averaged over turbulent pulsations, as well as average profiles of the wind velocity. The main modifications are: 1) the use of high-speed video recording (1000-10000 frames/sec) with continuous laser illumination, which makes it possible to obtain an ensemble of velocity fields at all phases of the wavy surface for subsequent statistical processing; 2) the development and application of special algorithms for retrieving the form of the curvilinear wavy surface from the images under conditions of parasitic images of particles and droplets on the air side close to the surface; 3) adaptive cross-correlation image processing for finding the velocity fields on a curvilinear grid that follows the wave boundary; 4) the use of the Hilbert transform to detect the wave phase at which each velocity field was measured, for subsequent binning within the procedure for obtaining the average characteristics. Introduction The interaction between wind flow and surface waves is one of the central problems of the study and parameterization of exchange processes in boundary layers of the atmosphere and ocean [1]. Of special interest is the case of steep and breaking waves forming at strong wind, but carrying out field measurements of air flow and wave parameters under such conditions is a very difficult problem (see [2]). That is why laboratory modeling is used [3][4][5] to simulate conditions up to severe weather. The main difficulties in the experimental study of a turbulent air flow over a wavy water surface in laboratory conditions are connected with measuring wind characteristics near the water surface, especially in wave troughs, where one can expect the appearance of the most interesting features of this flow, such as shielding and separation of the flow. The Particle Image Velocimetry (PIV) technique is best suited for measuring the air flow in wave troughs and water flows close to the surface. In [6][7][8], the experience of using the PIV technique for measuring the air flow velocity over a wavy surface was presented. In [8], the structure of average velocity fields in the air flow and their perturbations initiated by waves, as well as the structure of turbulent stresses, were successfully studied. However, those measurements were carried out at low wind velocities. The subject of this work is studying the characteristics of the air flow and waves under conditions of wave breaking, with the formation of sprays near the wavy surface and, in particular, in wave troughs, due to high wind speeds. Earlier, measurements under conditions of strong winds with an equivalent wind velocity U 10 > 25 m/s in the laboratory modeling of extreme meteorological conditions were carried out only using contact gauges (Pitot tubes and hot wires/hot films) at a considerable distance from surface wave crests [3][4][5][9].
The measurement region was positioned above the layer of constant fluxes, within which the velocity profile is logarithmic and its parameters (the friction velocity u* and roughness height z 0 ) could be determined directly using the profile method. Therefore, u* and z 0 have to be determined using the self-similarity property of the velocity profile of the flow in channels, as it was done in [4]. However, to refine the self-similar form of the velocity profile, it is necessary to measure the flow velocity as close to the water surface as possible [10]. The measurement can be implemented only using noncontact methods based on visualisation, e.g., PIV. This work presents measurement results for the air flow velocity over waves in laboratory conditions modeling a wide range of winds (up to strong and hurricane conditions) with the use of a specially modified PIV technique. Experimental setup and measuring technique Experiments were carried out on the Wind-Wave Flume of the Large Thermostratified Tank of IAP RAS (overview in Fig. 1a). The length of the air flow channel with a cross section of 0.4 × 0.4 m is 10 m over the water surface. A detailed description of this setup and the principles of design and control of the air flow in it were presented in [4,5]. The general scheme of the experiments is presented in Fig. 1b. In addition to the PIV methods, which were the main tool in the presented investigations, previously tested contact measurement methods were used. In the working section of the channel, at a distance of 7 m from the entrance, average profiles of the air flow velocity were measured using a Pitot tube. The air flow in the channel was visualized using polyamide particles with an average diameter of 10 μm and a density of 1.02 g/cm³. The inertial time was 2 × 10^−4 s. The seeding device was similar to that used in [8] and positioned at the entrance of the channel at a distance of 6 m from the detection area. The test experiments demonstrated that the system brings no distortions to the wind flow in the region of measurements. To apply the PIV method (principal scheme shown in Fig. 1c), the motion of particles in the air in the region of measurements was illuminated by a laser sheet along the channel axis, 8 m from the inlet. The laser sheet is formed by a cylindrical lens from a vertical laser beam (a continuous Nd:YAG laser, 532 nm, 4 W). The width of the illuminated area was varied by choosing the radius of the cylindrical lenses and their mutual arrangement. The motion of the particles introduced into the air flow and of the water surface illuminated by the laser sheet was recorded from the side by a VideoSprint high-speed camera positioned horizontally in a sealed box (see Fig. 1b). The camera was positioned horizontally and the level of its optic axis was 8 cm above the level of the water surface. The focal plane was at a distance of 77 cm from the laser knife. The size of the imaged area was (66.4–16 mm) × 256 mm (the horizontal size of a frame decreases with an increase in filming speed). A droplet removal system in the form of a metal tube blown with supplied compressed air was mounted on the internal side of the channel side wall through which the filming was performed. According to test experiments, the system introduced no distortions to the wind flow in the region of measurements.
The experiments were carried out at four values of the air flow rate in the channel: 1.1, 1.6, 2.2, and 2.7 m³/s; as shown below, these correspond to equivalent wind velocities at the height of 10 m, U10, of 11, 20, 37, and 48 m/s, respectively. In the last two cases, strong wave breaking with the formation of whitecaps and intense spray generation was observed (see photographs in Fig. 2). For each wind velocity, 30 realizations were recorded, with 3000-4500 frames in each realization. The filming speed was 1500, 3000, 5000, and 6000 fps; the exposure time ranged from 50 to 14 µs. Fig. 1. a) Wind-Wave Flume of the Large Thermostratified Tank of IAP RAS (air channel in the center); b) principal scheme of the experimental setup in the working cross section of the wind channel: 1 - laser, 2 - droplet blow-off system, 3 - high-speed camera, 4 - wind-wave channel; c) scheme of the PIV method. Experimental data processing Determining the shape of the wave surface in each frame is necessary for finding the velocity field by cross-correlation processing with the adaptive PIV algorithm on a curvilinear grid in the immediate vicinity of the water surface. Earlier, to determine the surface shape from high-speed camera images, a stepwise algorithm [11] based on the Canny method [12] was developed. That method operated well under conditions of weakly breaking waves. However, with an increase in the air flow velocity, a transition to rather intense breaking with the formation of sprays and whitecaps was observed. For this reason, a combined method of measuring the elevation of the water surface was used, in which the optical measurements were complemented by contact measurements with a wire wave gauge mounted on the channel axis near the edge of the laser sheet. The records of the level elevation and of the high-speed camera were synchronized. The final surface shape is a combination of the data obtained by the contact and non-contact methods; with increasing wind velocity, the role of the contact measurements increased, up to a full substitution of the optical measurements for the flow rate of 2.7 m³/s (see Fig. 3). After finding the surface shape, velocity fields were calculated by the cross-correlation method on a curvilinear grid taking into account the current shape of the surface [8]. We used a modified PIV processing method based on adaptive determination of the shift of the cross-correlation function (hereafter, CCF) peak for rectangular elements of the image in two sequential frames. The spacing of the grid on which the velocity field was calculated was 3.2 mm. To increase the accuracy of the algorithm by decreasing the search window size, the processing was performed in two stages. At the first stage, using a window with dimensions of 128 × 64 px, the profile of the average horizontal shift of particles was found for each wind velocity. At the second stage, the comparison window was shifted by the amount found at the first stage. The size of the search window could thus be smaller than the final shift of the particles, which increased the spatial resolution of the method, accelerated the processing, and made it possible to obtain more data. The CCF maximum was sought by the adaptive method in two iterations, by analogy with [8]. In the first pass, the shift was approximately determined over a window of larger size (32 × 32 px, or 6.4 × 6.4 mm); then a refined search was performed, with allowance for the calculated shift, over a window of smaller size (8 × 8 px, or 1.6 × 1.6 mm).
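The adaptive two-iteration peak search described above can be illustrated with a minimal Python sketch: a coarse pass estimates the integer displacement with a 32 × 32 px interrogation window, and a second pass re-correlates an 8 × 8 px window offset by that coarse estimate. Function names and the FFT-based correlation are illustrative assumptions, not the code used in the study; boundary handling and the subpixel refinement discussed next are omitted.

```python
import numpy as np
from scipy.signal import fftconvolve

def ccf_peak_shift(win_a, win_b):
    """Integer pixel shift of win_b relative to win_a from the cross-correlation peak."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    ccf = fftconvolve(b, a[::-1, ::-1], mode="full")      # 2-D cross-correlation map
    iy, ix = np.unravel_index(np.argmax(ccf), ccf.shape)
    cy, cx = np.array(win_a.shape) - 1                    # centre of the full map
    return iy - cy, ix - cx                               # (dy, dx) in pixels

def adaptive_shift(frame1, frame2, y0, x0):
    """Two-iteration adaptive search: coarse 32x32 px window, then an 8x8 px window
    offset by the coarse estimate (no boundary handling in this sketch)."""
    def window(img, y, x, half):
        return img[y - half:y + half, x - half:x + half]

    dy1, dx1 = ccf_peak_shift(window(frame1, y0, x0, 16),
                              window(frame2, y0, x0, 16))
    dy2, dx2 = ccf_peak_shift(window(frame1, y0, x0, 4),
                              window(frame2, y0 + dy1, x0 + dx1, 4))
    return dy1 + dy2, dx1 + dx2   # refine further with a subpixel Gaussian fit (see below)
```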
To increase the accuracy at the last step of the cross correlation, an algorithm of subpixel approximation of the CCF peak by a two-dimensional Gaussian profile (see [8]) was applied; this makes it possible to take into account CCF values at points near the maximum. Data that did not satisfy the quality criteria were eliminated from further processing. In particular, windows with an insufficient number of particles were discarded. For each image, points corresponding to high values of the brightness gradient were found using the Sobel operator; these points correspond to the edges of particle images in the frames. For each window, the number of such boundary pixels was calculated. The threshold value of the brightness gradient used in the edge search was chosen so that no boundary pixels fell in regions without particles. The cross correlation was performed only for windows containing at least one boundary pixel. If the number of boundary pixels considerably exceeded the average, flares, spray, or a foam line were most often present in the corresponding part of the image; the shift calculated for such a window was therefore not taken into account in the subsequent averaging. Regions for which the value of the CCF maximum differed considerably from the typical value in the realization were also excluded, because such regions most often corresponded to a particle escaping from the laser sheet during the interval between frames. At strong winds, the inhomogeneity of the seeding increased, which resulted in considerable gaps (an absence of data) in the time dependences of the velocities at fixed horizontal coordinates. Despite the droplet-removal measures, optical distortions increased with increasing wind velocity, which reduced the accuracy. To correctly calculate average velocity fields and profiles, a method based on phasing the measured wind velocities at different levels above the wavy surface was used. The phase of the wave over which a velocity measurement point is positioned was determined by applying the Hilbert transform to time realizations of the water surface elevation taken in several equidistant vertical cross sections of each frame; there were from three to seven such cross sections in a frame, depending on the flow rate. The position of the surface for each vertical column of the velocity field was calculated using linear interpolation, so that each column has its own dependence of the surface position on time. The wavenumber spectra of the waves have a pronounced peak, and the deviation of the water surface from the horizontal is determined mainly by the fundamental wave harmonic. For this reason, averaging over the phase of the fundamental wave harmonic was performed. Frequency filtering of the time realizations obtained in this way was performed over a band 2 Hz wide around the peak frequency, determined for each wind velocity from the time realizations of the wave gauge. Such filtered realizations, with the constant component removed, are well suited for determining the wave phase by means of the Hilbert transform. Using the economical fast Fourier transform algorithm, the Hilbert transform permits one to determine the amplitude and phase of the surface elevation as functions of time. To obtain velocity fields averaged over turbulent fluctuations, conditional averaging at a fixed phase was performed.
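A minimal sketch of the phase-detection step described above, under the stated assumptions: the elevation record in one vertical cross section is band-pass filtered in a 2 Hz band around the spectral peak, the analytic signal is formed with the Hilbert transform, and velocity samples are then averaged conditionally in 18° phase bins. The function names, filter order, and sampling conventions are illustrative, not taken from the original processing code.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def wave_phase(eta, fs, f_peak, half_band=1.0):
    """Instantaneous wave phase (rad) from a surface-elevation record.

    eta    : elevation in one vertical cross section
    fs     : frame rate, Hz
    f_peak : peak frequency from the wave-gauge spectrum, Hz (must exceed half_band)
    """
    eta = eta - eta.mean()                                   # remove constant component
    b, a = butter(4, [(f_peak - half_band) / (fs / 2),
                      (f_peak + half_band) / (fs / 2)], btype="band")   # ~2 Hz band
    return np.angle(hilbert(filtfilt(b, a, eta)))

def phase_bin_average(values, phase, n_bins=20):
    """Conditional average of, e.g., wind velocity at a fixed distance from the surface,
    binned over the wave phase in 18-degree intervals."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.digitize(phase, edges) - 1
    return np.array([values[idx == k].mean() if np.any(idx == k) else np.nan
                     for k in range(n_bins)])
```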
To reduce errors connected with the insufficient number of measurements, the data were binned in phase intervals of 18°, which yields 20 phase bins. The averaging was performed in two ways: (i) Averaging over fixed horizons (Fig. 4a). The velocity fields were divided into horizons with a step of 32 px (6.4 mm of real height). For each horizon and each phase, velocities were accumulated over the whole realization and then averaged. (ii) Averaging at a fixed distance from the surface (Fig. 4b). For each frame, in each column of the grid, the cell with the same number was chosen; in the curvilinear coordinate system used, this means a fixed distance of the point from the surface. The data from the corresponding cells in height and phase were accumulated over the whole realization and then averaged. Results Velocity fields of the air flow averaged over turbulent pulsations were obtained for both ways of averaging. In curvilinear coordinates, by analogy with [8], the vertical coordinate is counted from the instantaneous position of the water surface. In rectangular coordinates, the position of the average surface level was taken as zero. The horizontal coordinate is the wave phase φ for a given point, recalculated using the wavelength λ determined from the dispersion relation for deep-water waves at the frequency corresponding to the spectral peak for each wind velocity: φ = 2πx/λ. Examples of flow fields for both kinds of averaging are presented in Fig. 5. The number of points satisfying the data-quality criteria differs at the same distance from the surface in different phases. This difference is especially pronounced near the surface, where the decrease in the quantity of data satisfying the quality criterion on the leeward side of the wave crest (negative values of the phase in Fig. 6) is caused by a decrease in the number of tracer particles in this region at the instant of shooting, due to the shielding of the wind flow by the wave crest. Note that, for the two highest wind velocities (U10 of 37 and 48 m/s), an inverse picture is observed: the number of points satisfying the quality criteria becomes larger on the leeward side of the crest. This can be connected with the appearance of spray, which is intensely generated at velocities U10 > 25 m/s. The droplet concentration on the windward side of the wave crest is considerably higher than on the leeward side, because the droplets are generated mainly near the top of the wave and are blown away by the wind. The cross-correlation algorithm for cells in this region yields the shift of the droplets rather than of the tracer particles, which are relatively scarce there. Since the droplet velocity is lower than the wind velocity, this leads to an erroneous underestimation of the air flow velocity near the surface. The different number of particles in different phases makes it necessary to apply conditional averaging over the phase to obtain correct results. Velocity values obtained using the Pitot tube always turned out to be lower than those obtained in the PIV measurements. Note that the PIV technique is a direct method of determining the air flow velocity, whereas the Pitot tube estimates the flow velocity indirectly through pressure; the underestimation of the flow velocity by the Pitot tube therefore probably indicates the presence of a systematic error.
Both velocity profiles have a similar shape and close values; they agree well with each other in the region above the wave crests. The main difference between them is that averaging in curvilinear coordinates yields air flow velocities in the wave troughs, whereas averaging over horizons yields the vertical profile of the average wind velocity only above the wave crests. For this reason, the aerodynamic drag coefficient of the water surface was determined from the dependences obtained by averaging in curvilinear coordinates. Conclusion Laboratory experiments on the structure of the turbulent air boundary layer over waves were carried out at the Wind-Wave Channel of IAP RAS under conditions modeling the near-water atmospheric boundary layer at strong and hurricane winds. The air flow in the channel, with a square cross section of 0.16 m², was created by a fan. The air flow rate took values of 1.1, 1.6, 2.2, and 2.7 m³/s, which correspond to estimated equivalent wind velocities from 10 to 48 m/s at the standard height of 10 m. A modified Particle Image Velocimetry (PIV) technique was used to obtain velocity fields of the air flow over the wave-curved water surface, averaged over turbulent pulsations. We emphasize that such remote methods also permit one to obtain the air flow velocity field below the wave crests, in the wave troughs. Using wave-phase averaging at fixed distances from the water surface, average profiles of the air flow velocity were obtained in curvilinear coordinates following the wave. Note that such measurements could be reliably performed only with the PIV technique. Using contact measurements (a Pitot tube), profiles of the air flow velocity above the wave crests were measured in Cartesian coordinates. To compare the results obtained by the two techniques, the PIV data were also expressed in Cartesian coordinates. Both experimental methods yielded close results at distances of more than 10 mm above the wave crests, but introducing the Pitot tube, with a diameter of about 10 mm, into the flow could result in considerable distortions of the flow close to the water surface. Parameters of the air flow (the friction velocity and the drag coefficient of the surface) can be further determined by extrapolating the logarithmic part of the velocity profile, and their dependences on the wind speed can be obtained. This work was supported by the Russian Foundation for Basic Research (Nos. 15-35-20953, 17-05-00958, 14-05-91767 AF_a, 16-55-52022); the experiments were partially supported by the Russian Science Foundation (Agreement No. 14-17-00667) and the data processing was partially supported by the Russian Science Foundation (Agreement No. 15-17-20009). The work is also supported by the international project Air-Sea Interaction under Stormy and Hurricane Conditions: Physical Models and Applications to Remote Sensing (ASIST) within the FP7 program and by the Federal Target Program "Research and development in priority areas of Russian
4,676
2017-01-01T00:00:00.000
[ "Environmental Science", "Physics", "Engineering" ]
Comparison of Two Low-Profile Prosthetic Retention System Interfaces: Preliminary Data of an In Vitro Study In recent years, a major research goal of companies has been to create mechanical components for rehabilitation that are safer and more reliable. Evaluating their biomechanical features could be a way to improve them. The purpose of this study was to evaluate the different biomechanical features of low-profile retentive systems (Rhein®). Two different attachment systems were tested: OT Equator® Smart Box and Locator® R-TX. Using a machine built to simulate the connection and disconnection of the attachments in a combined manner, it was possible to evaluate these parameters over time. Attachments were mounted in two different configurations of the divergence angle: 10° and 50°. The drop in retention force tended to stabilize over time. The Locator® R-TX attachment experienced a more rapid decrement of the retention force than the OT Equator® Smart Box. Both tested systems experienced a high drop in retention; this drop tended to stabilize after 1.5 years of use, and it was correlated with the divergence angle. The OT Equator® Smart Box system underwent this loss of retention more gradually than the Locator® R-TX. This study presents preliminary results from a bioengineering and biomechanical point of view, providing useful information for the continuous improvement of these devices and, therefore, for the quality of patients' oral health. Background Implant-supported mandibular overdentures retained by two implants are a cost-effective treatment option for edentulous patients [1,2]. This treatment improves the stability and retention of the mandibular complete denture and patients' masticatory function compared with conventional removable dentures [2][3][4]. Retention of a removable denture is an important property that allows the forces of dislodgement to be resisted in a direction opposite to its path of placement [5,6]. Several attachment systems have been developed to improve the retention characteristics and stability of implant-supported overdentures, such as splinted (bar attachment) or unsplinted systems (o-ring/ball/spherical types, magnets, telescopic crowns, or stud attachments) [7]. The performance of implant-supported overdentures depends on the retentive capacity of the attachment system employed, which must provide forces strong enough to prevent overdenture displacement [8,9]. Biomechanical knowledge of different attachment systems could help clinicians to select the proper attachment for each case [10][11][12]. Among the attachment systems, stud attachments are widely accepted for their lower technique sensitivity, better affordability, easier repairability, and their ability to be successfully positioned on resorbed edentulous ridges [10,13]. Attachment system selection depends on a variety of factors that should be identified early in the treatment sequence, such as the alignment of the implants, the retention value needed, the available vertical and horizontal prosthetic space, and the jaw morphology [14,15]. Ultimately, the decision is usually based on the clinician's experience and preference [10,11]. Several stud attachment systems have been developed over the years, including OT Equator® (Rhein83, Bologna, Italy) and Locator® R-TX (Zest Anchors Inc, Escondido, CA, USA).
The OT Equator® attachment consists of a titanium male abutment with a hard titanium nitride coating and a semispherical shape reminiscent of ball attachments, which supports a stainless-steel retentive cap housing nylon retentive inserts available in four color-coded levels of retention. The OT Equator® Smart Box is a cap container with an innovative design which, thanks to a tilting mechanism with a rotation fulcrum, allows passive insertion of the attachment even with divergences of up to 50°. Four types of retention caps are available: extra-soft, soft, standard, and hard. A next-generation Locator® R-TX attachment system was recently introduced to overcome the limitations associated with conventional Locator attachments. The new features include an aesthetic, harder, and more wear-resistant titanium carbon nitride coating, dual-retentive features on the external surface of the abutment, and a reduction in the coronal abutment dimension. The denture attachment housings are designed to permit a 50% increase in pivoting capability and up to a 30° correction per implant, as opposed to a maximum of 20° correction per implant with a conventional Locator. Moreover, Locator® R-TX offers one set of inserts (gray = zero retention, blue = low retention, pink = medium retention, white = high retention) with an improved design to resist edge deformation. Aim This study aimed to evaluate the retention force of these two attachment systems for overdentures. In particular, the study sought to evaluate the maximum force required to remove the overdenture, comparing three types of retentive caps for each attachment system over time. Results During each cycle, the maximum force of the removal phase was registered, and the average value with standard deviation was estimated for the three tests. The average retention force versus time, in years, was plotted for each of the two attachment systems. For a divergence angle of 10° (Figure 1a), the Locator® R-TX attachment experienced a rapid decrement of the retention force in the first half year. The value tended to stabilize after 2 years, converging, independently of the cap retention class, to a force value of 9.0 ± 0.7 N (extra-soft: 8.2 ± 3.8 N; soft: 9.4 ± 1.0 N; standard: 9.4 ± 1.7 N). The OT Equator® Smart Box attachment system experienced a more gradual change in the retention force, which tended to stabilize after 2.5 years, maintaining a different retention force for the three cap classes (extra-soft: 7.9 ± 1.1 N; soft: 12.6 ± 1.4 N; standard: 16.8 ± 2.5 N). The corresponding drops in retention force for the 10° configuration are reported in Figure 1b. For a divergence angle configuration of 50°, both attachment systems experienced a change in retention force during the first half year (Figure 2a,b). In particular, the Locator® R-TX attachment showed an abrupt change, with a final average retention force after 4.56 years of 17.5 ± 1.6 N and a small difference between cap retention classes (extra-soft: 16.5 ± 5.0 N; soft: 16.6 ± 8.5 N; standard: 19.4 ± 3.04 N), while maintaining a higher retention force than the Smart Box system. The Smart Box attachment, on the other hand, tended to stabilize at a different retention force for each class, maintaining the resistance class over time (extra-soft: 6.2 ± 0.1 N; soft: 11.2 ± 0.5 N; standard: 19.3 ± 0.5 N). Both attachments experienced a high drop in the retention force, which tended to stabilize after about 1.5 years (Figure 2b).
All of the Locator® groups reached up to a 26.04% drop in the retention force after 4.56 years (extra-soft: 25.71%; soft: 27.88%; standard: 24.54%), while the Smart Box group revealed a higher retention force drop with respect to the 10° divergence angle configuration, but a smaller one compared to the Locator® attachment (extra-soft: 30.25%; soft: 41.94%; standard: 57.11%). Discussion The Smart Box® is an abutment container that, thanks to a tilting mechanism with a rotation fulcrum, allows passive insertion even at extreme divergences of up to 50°. This feature allows passivation of forces and, therefore, better predictability of the rehabilitation [16][17][18]. It improves the quality of life of patients by avoiding the complex and invasive surgery that is in many cases necessary to perform a fixed implant-prosthetic rehabilitation; this is one of the advantages of this system. As shown in detail in Figure 3, insertion of the Smart Box® also occurs at divergent angles. Other retentive systems, such as the Locator®, do not allow divergence angles of up to 50°, and it is therefore possible that residual forces are created in the prosthesis, in the framework, or at the dental implants. Residual forces could damage mechanical components or cause biological damage [18,19]. The overdenture is a mobile prosthesis on dental implants, stable and comfortable; the upper one may not need a palatal plate. Many patients have difficulty keeping their removable prosthesis stable, particularly in the mandible, or they have difficulty tolerating the palatal coverage in the case of the upper arch. The dentures are removable (detachable), so they can be cleaned easily and allow hygienic maneuvers on the implants, an advantage for elderly patients with reduced mobility and lost dexterity.
At the same time, these prostheses are perfectly stable during chewing and talking. This is the simplest type of implant-prosthetic rehabilitation, in which two or four dental implants are positioned in the anterior area of the mandible or the maxilla. A functional set-up is thus obtained in which the prosthesis is anchored to the implants anteriorly and rests on the mucosa [19,20]. From the results obtained in this simulation, the retention force is better maintained over time with the OT Equator® than with the other system, especially when there is disparallelism between the dental implants. The drop in retention force is higher for the Locator®, which gives a lower guarantee of durability over time and worse predictability of the oral rehabilitation. Certainly, it should be considered that this is a simulation, and the insertion and disconnection cycles were performed over a short period, which could somehow alter both the internal nylon inserts and the metal housings themselves.
Materials and Methods Two different attachment systems, each with three different classes of retentive caps, were tested: OT Equator® Smart Box and Locator® R-TX. Table 1 reports the three cap classes from the manufacturer adopted for each of the two attachment systems, with the respective nominal retention forces [21,22]. The tests simulate the insertion-removal cycle of the overdenture from the attachment system, evaluating the maximum force needed to detach the implant overdenture from the attachment system. Two Core-Vent implant replicas, 3 mm in diameter with an internal hexagon, were fixed into a dedicated specimen with autopolymerizing PMMA resin (DuraLay, GC Pattern Resin) to simulate the elastic mobility behavior of the osseointegrated implant (Figure 4a). The tested attachment systems (patrix) were screwed onto the implant replicas according to the manufacturers' instructions. The OT Equator® and the Locator® R-TX were screwed with a torque in the range of 22 to 25 Ncm, using the OT Equator® screwdriver (Rhein83) and the Locator® screwdriver (Zest), respectively. Afterward, the female components were incorporated into the notched surface of the matrix mounting, with the two components already connected, using a direct pick-up technique. Finally, the matrix mounting was connected to the load cell of an electrodynamic tensile testing machine, MTS Acumen 807.001 (MTS headquarters, Eden Prairie, MN, USA), with a load cell of 1.5 kN (Figure 4b). The testing machine was used to apply a vertical uniaxial dislodging force to the attachment system, simulating actual clinical situations. Each retentive cap was subjected to 5000 insertion-separation cycles, corresponding to 4.56 years of removing and inserting the overdenture three times a day [21,22], i.e., 1096.49 insertion cycles per year. The cycle routine consists of 2.5 mm upwards in 2.5 seconds, a 0.1-second stop, and 2.5 mm downwards in 2.5 seconds, with 1.5 seconds of connection on the attachment to allow the elastic recovery of the attachment components [23]. During the test, artificial saliva (Sinopia) was used as a lubricant at a constant temperature of 37 °C, simulating the normal conditions of the oral cavity. A pair of attachments for each of the two systems was mounted in two different divergence angle configurations: the former with an angle of 10° (−5°/+5° from the main axis), the latter with an angle of 50° (−25°/+25° from the main axis). For each cap, three tests were performed, for a total of twelve tests per divergence angle configuration.
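To make the wear-test protocol above concrete, the sketch below converts the 5000 insertion-separation cycles into the equivalent wearing time at three removals per day and builds the crosshead displacement profile of a single cycle (2.5 mm up in 2.5 s, 0.1 s stop, 2.5 mm down in 2.5 s, 1.5 s seated). This is an illustrative reconstruction with assumed names; it is not code from the study.

```python
import numpy as np

CYCLES = 5000
CYCLES_PER_YEAR = 3 * 365.25               # three insertions/removals per day
print(f"{CYCLES} cycles ~ {CYCLES / CYCLES_PER_YEAR:.2f} years of use")   # ~4.56 years

def cycle_profile(dt=0.01):
    """Crosshead displacement (mm) versus time (s) for one insertion-removal cycle."""
    seg = lambda dur, a, b: np.linspace(a, b, int(dur / dt), endpoint=False)
    disp = np.concatenate([seg(2.5, 0.0, 2.5),   # 2.5 mm upwards in 2.5 s (removal)
                           seg(0.1, 2.5, 2.5),   # 0.1 s stop at the top
                           seg(2.5, 2.5, 0.0),   # 2.5 mm downwards in 2.5 s (insertion)
                           seg(1.5, 0.0, 0.0)])  # 1.5 s seated for elastic recovery
    return np.arange(disp.size) * dt, disp
```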
Conclusions The results obtained from this in vitro study could provide useful information for the performance improvement of retentive systems. The discrepancy of the results in favor of the Equator system is already a good starting point for understanding the ideal morphology for a retentive system with a higher retention force over time.
4,275.6
2019-11-27T00:00:00.000
[ "Engineering" ]
ON PROPERTIES OF CERTAIN ANALYTIC MULTIPLIER TRANSFORM OF COMPLEX ORDER The focus of this paper is to investigate the subclasses S∗C(γ, µ, α, λ; b) and TS∗C(γ, µ, α, λ; b) = T ∩ S∗C(γ, µ, α, λ; b), to obtain coefficient bounds, and to establish their relationship with certain existing results in the literature. Introduction Let A be the class of normalized analytic functions f in the open unit disc U = {z ∈ C : |z| < 1} with f(0) = f′(0) − 1 = 0 and of the form f(z) = z + Σ_{n=2}^∞ a_n z^n, a_n ∈ C, (1.1) and let S be the class of all functions in A that are univalent in U. Also, the subclass of functions in A that are of the form f(z) = z − Σ_{n=2}^∞ a_n z^n, a_n ≥ 0, (1.2) is denoted by T, and the subclasses S*(α) and C(γ) are given respectively by the starlikeness and convexity conditions of order α and γ. Moreover, the class TS*(γ), denoted by T ∩ S*(γ), is the subclass of functions f ∈ T such that f is starlike of order γ, and, respectively, TC(γ) is the class of functions f ∈ T such that f is convex of order γ. Furthermore, the class TS*C(γ, β), the subclass of functions f ∈ T such that f belongs to the class S*C(γ, β), was studied by Altintas et al. and other researchers; for details see [3,5,6]. Using the unification in (5), Nizami Mustafa [6] introduced and investigated the classes S*C(γ, β; τ) and TS*C(γ, β; τ), 0 ≤ γ < 1, β ∈ [0, 1], τ ∈ C, defined as follows: a function f ∈ S given by (1.1) is said to belong to the class S*C(γ, β; τ) if the corresponding condition is satisfied. Meanwhile, the author in [4] defined a linear transformation D^m_{α,λ}f. Motivated by the work of Mustafa in [6], we study the effect of applying the linear operator D^m_{α,λ}f to the unification of the classes of functions S*C(γ, β; τ). We now define the class S*C(γ, µ, α, λ; b) to be the class of functions f ∈ S which satisfy the corresponding condition; we also denote by D_T the subclass of the class of functions in (7) which are of the form (1.9), and by TS*C(γ, µ, α, λ; b) = T ∩ S*C(γ, µ, α, λ; b) the class of functions f of the form (1.9) that belong to S*C(γ, µ, α, λ; b). In this paper, we investigate the subclasses S*C(γ, µ, α, λ; b) and TS*C(γ, µ, α, λ; b). 2. Coefficient bounds for the classes S*C^λ_α(γ, µ; b) and TS*C^λ_α(γ, µ; b) Theorem 2.1. Let f be as defined in (1.1). Then the function D^m_{α,λ}f belongs to the class S*C(γ, µ, α, λ; b) provided the stated coefficient inequality holds; the result is sharp. Proof. It suffices to show the required inequality; a simple computation in (2.1), using (1.7), establishes it, which implies (1), so that the function D^m_{α,λ}f belongs to the class S*C(γ, µ, α, λ; b). The result is sharp for the extremal function involving z^n, n ≥ 2. Corollary 2.5. Let f be as defined in (1.1). Then the function D^m_{α,λ}f belongs to the class S*C(γ, µ, 1, 0, 1; b) under the corresponding coefficient condition; the result is sharp and agrees with Theorem 2.1 in [6]. Corollary 2.6. Let f be as defined in (1.1). Then the function D^m_{α,λ}f belongs to the class S*C(γ, 0, 1, λ, 0; 1) under the corresponding coefficient condition; the result is sharp and agrees with Corollary 2.1 in [6]. Corollary 2.7. Let f be as defined in (1.1). Then the function D^m_{α,λ}f belongs to the class S*C(γ, µ, 1, λ, 0; 1) under the corresponding coefficient condition; the result is sharp and agrees with Corollary 2.2 in [6]. Proof. We prove only the necessity part of the theorem, as the sufficiency proof is similar to the proof of Theorem 1.
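The defining conditions for the starlike and convex subclasses of order referenced above appear to have been lost in extraction; for orientation, the standard definitions usually intended in this context are reproduced below. This is a reconstruction of the conventional formulas, not a restoration of the paper's exact display equations.

```latex
% Standard order-gamma definitions presumably intended by "given respectively by" above:
\[
  S^{*}(\gamma)=\Bigl\{f\in A:\ \operatorname{Re}\frac{z f'(z)}{f(z)}>\gamma,\ z\in U\Bigr\},
  \qquad
  C(\gamma)=\Bigl\{f\in A:\ \operatorname{Re}\Bigl(1+\frac{z f''(z)}{f'(z)}\Bigr)>\gamma,\ z\in U\Bigr\},
  \quad 0\le\gamma<1 .
\]
```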
1,068.2
2019-10-26T00:00:00.000
[ "Mathematics" ]
Cooperative Transmission in Mobile Wireless Sensor Networks with Multiple Carrier Frequency Offsets: A Double-Differential Approach As a result of the rapidly increasing mobility of sensor nodes, mobile wireless sensor networks (MWSNs) are subject to multiple carrier frequency offsets (MCFOs), which result in time-varying channels and drastically degrade the network performance. To enhance the performance of such MWSNs, we propose a relay selection (RS) based double-differential (DD) cooperative transmission scheme, termed RSDDCT, in which the best relay sensor node is selected to forward the source sensor node's signals to the destination sensor node with the detect-and-forward (DetF) protocol. Assuming a Rayleigh fading environment, exact closed-form expressions for the outage probability and average bit error rate (BER) of the RSDDCT scheme are first derived. Then, simple and informative asymptotic outage probability and average BER expressions for the large signal-to-noise ratio (SNR) regime are presented, which reveal that the RSDDCT scheme can achieve full diversity. Furthermore, the optimum power allocation strategy in terms of minimizing the average BER is investigated, and simple analytical solutions are obtained. Simulation results demonstrate that the proposed RSDDCT scheme can achieve excellent performance over fading channels in the presence of unknown random MCFOs. It is also shown that the proposed optimum power allocation strategy offers a substantial average BER performance improvement over the equal power allocation strategy. Introduction In recent years, with the rapid advances in microelectromechanical systems (MEMS) and wireless communication technologies, wireless sensor networks (WSNs) have attracted increasing research attention for their various military and civil applications, including intrusion detection, automated data collection, healthcare, and environmental monitoring [1,2]. A WSN is usually composed of a large number of low-cost and low-power sensor nodes, which are statically deployed in a certain region of interest. However, in many application scenarios, for example, wildlife protection and object tracking, due to the dynamic changes of events and environment, a purely static WSN can face severe problems, such as limited coverage and channel capacity, unfair energy usage, and increasing multiple missions [3]. To handle these problems, a new class of WSNs, namely, mobile wireless sensor networks (MWSNs), has been proposed by introducing mobility to some or all of the sensor nodes, and it has been shown that MWSNs outperform static WSNs in terms of longer network lifetime, greater channel capacity, enhanced coverage and targeting, and so on [3,4]. Many researchers have been dedicated to exploring the aforementioned advantages of MWSNs, and great progress has been made [5][6][7][8][9][10]. However, there are still numerous key technical issues that need further research, among which how to realize reliable communications between the mobile sensor nodes over fading channels stands out as a critical consideration.
Cooperative communications have been demonstrated to be a promising technology for improving the spectral efficiency and reliability of wireless communication systems by forming a virtual antenna array among cooperating nodes [11,12]. The numerous sensor nodes and the resource-constrained nature make WSNs one of the most important application fields for cooperative communications, and a variety of cooperative schemes have been proposed to improve the performance of different kinds of WSNs. These cooperative schemes generally focus on two aspects. On one hand, a number of works investigated the cooperative multiple-input and multiple-output (MIMO) transmission technique for WSNs, where the sensor nodes cooperate with each other to form a virtual MIMO channel. The contribution [13] first proposed a cooperative MIMO transmission scheme with Alamouti coding for WSNs, and, based on a similar virtual MIMO concept, various cooperative schemes employing space-time block codes (STBC) were proposed and analyzed in [14][15][16][17]. On the other hand, many other researchers endeavored to design selective cooperative relaying schemes. In [18], by combining relay selection with power control, a selective single-relay cooperative scheme was proposed, which can minimize the energy consumption and extend the network's lifetime. A simple geographic-based selective cooperative relaying protocol was proposed in [19], where the best relay can be efficiently determined by using the geographical information among nodes. The authors in [20] proposed an adaptive relay selection based cooperative scheme for a cluster-based WSN, which can guarantee the quality of service (QoS) without the need for prior knowledge of the wireless network model or centralized control. While all of these cooperative schemes can significantly improve the performance of WSNs, their key limitation is that they all assume that full channel state information (CSI) and perfect synchronization can be achieved. However, in actual WSNs, especially MWSNs, where the channels between the sensor nodes undergo different kinds of fading, it is challenging or even impossible to obtain perfect CSI. Moreover, as sensor nodes are evolving towards high mobility, for example, with more and more sensors deployed on ground vehicles and unmanned aerial vehicles [21,22], the perfect synchronization assumption is also not justifiable in MWSNs, where each distributed sensor node is equipped with its own local oscillator and multiple carrier frequency offsets (MCFOs) can be caused by (i) simultaneous transmissions from spatially separated sensor nodes equipped with different oscillators and (ii) Doppler shifts introduced by the relative motions between the transmit and receive sensor nodes. In such cases, the channels in MWSNs are time-varying; therefore, all of the existing cooperative schemes originally developed for static WSNs inevitably suffer drastic performance degradation or even break down.
In order to reduce the burden of channel estimation, noncoherent and differential cooperative transmission schemes have been proposed in [23,24]. The works in [25,26] extended differential modulation to multirelay cooperative networks and showed that full diversity could be achieved. In [27,28], the authors proposed a differential modulation (DM) and relay selection (RS) based scheme for a detect-and-forward (DetF) cooperative network (DM-RS-DetF), and it was revealed that the DM-RS-DetF network could also achieve full diversity order. However, these schemes still assume that no CFOs exist in the networks, which makes them not applicable to MWSNs with MCFOs. The approaches for dealing with CFOs in communication systems can generally be classified into two main categories: the first focuses on estimating and compensating the CFOs by designing suitable estimators, while the second resorts to developing novel techniques that are robust to the CFOs. Recently, in an effort to eliminate the impact of MCFOs, a number of MCFO estimators have been proposed for both amplify-and-forward (AF) and decode-and-forward (DecF) cooperative networks [29][30][31][32]. Nevertheless, as observed in these works, the estimators as well as the transceivers are generally quite computationally complex, and the overheads consumed by parameter estimation are always significant, which limits their deployment on resource-constrained sensor nodes. Moreover, it is also a challenging task to reliably feedforward/feedback the CSI or MCFO estimates to different sensor nodes. On the other hand, imperfect MCFO estimates and compensation still degrade the network performance. Hence, for all these reasons, it may be more practical to address the MCFOs by developing robust transmission techniques rather than by designing estimators to estimate and compensate the MCFOs in actual MWSNs. In this paper, we consider a MWSN over Rayleigh fading channels, where all the wireless links among the sensor nodes are perturbed by different random MCFOs. This is a practical scenario which has scarcely appeared in the literature. Instead of devising a complicated CFO estimator, we propose to address the MCFOs in the MWSN by employing the double-differential (DD) modulation technique, which was originally proposed by Okunev [33]. The major advantage of DD modulation is its bypass of CFO and channel estimation, and it has turned out to be a powerful technique to cope with unknown CFOs in a number of cooperative systems [34][35][36][37][38]. The AF and DecF based DD cooperative systems were introduced and analyzed in [34] and [35], respectively. The authors in [36] proposed a selective DecF protocol, which could compensate for the signal-to-noise ratio (SNR) loss in a single-relay DD cooperative system. To further improve the channel utilization, a low-complexity piecewise linear (PL) decoder for the DecF based DD cooperative system was designed in [37], and it was shown that the proposed PL decoder could achieve full spatial diversity, while, in [38], the authors investigated AF based DD multirelay networks and presented an effective relay selection strategy to improve the system performance.
Motivated by the excellent performance of these schemes, we herein focus on the DetF relaying protocol and develop a robust relay selection (RS) based double-differential cooperative transmission scheme (RSDDCT) to enhance the performance of the MWSN under consideration. In our scheme, a simple and effective Max-Min relay selection strategy is applied to reduce the energy consumption of the network, through which only the best relay sensor is selected to forward the source sensor's double-differentially modulated signals to the destination sensor with the DetF protocol. For brevity, we refer to the MWSN with the proposed RSDDCT scheme as the RSDDCT-DetF network in the remainder of this paper. To facilitate the performance characterization, we present a comprehensive performance analysis for the RSDDCT-DetF network. In this light, we derive exact closed-form expressions for the outage probability and average bit error rate (BER), along with their asymptotic expressions in the high SNR regime. Simulation results show that the RSDDCT-DetF network can achieve excellent performance over fading channels in the presence of random MCFOs. While the contributions [34][35][36][37][38] have significantly improved our knowledge of DD cooperative systems, the most important differences between our work and them are as follows. (1) In [34][35][36][37], the authors only focused on single-relay scenarios, whereas WSNs are generally modeled as multirelay networks; moreover, all the analytical results were limited to the error probability, and only lower or upper bounds but no exact expressions were derived. (2) In [38], only AF multirelay systems were considered, and the proposed relay selection strategy is not applicable to regenerative networks; in addition, there was no analytical result on the system performance. To the best of the authors' knowledge, there is no previous work on regenerative multirelay cooperative networks with DD modulation. The main contributions of the paper can be summarized as follows. (i) We propose addressing the MCFOs in MWSNs by employing the DD modulation technique, which is practical but has not been reported in the literature. More specifically, we develop a robust relay selection based double-differential cooperative transmission scheme, termed RSDDCT, to enhance the performance of MWSNs with MCFOs. (ii) Assuming a Rayleigh fading environment, we derive exact closed-form expressions for the outage probability and average BER of the RSDDCT-DetF network at arbitrary SNRs, which provide a fast and efficient means to evaluate the performance of MWSNs employing the proposed scheme. (iii) In order to gain further insights into the impact of system parameters, such as the fading parameters and the number of relay sensors, we look into the high SNR regime and present simple and informative high-SNR approximations for the outage probability and average BER, which reveal that a MWSN with the proposed scheme can achieve full diversity order. (iv) Based on the derived analytical expressions, we formulate an optimization problem which seeks to minimize the average BER. In particular, we consider power allocation between the source sensor and the relay sensors under a total transmit power constraint, and simple closed-form solutions are derived. Comparisons based on simulations demonstrate that a significant performance improvement is achieved using the optimum power allocation compared to the equal power allocation, which provides an effective method to improve the MWSN's performance under a fixed power budget.
The rest of the paper is organized as follows.Section 2 introduces the DD modulation and the system model of the RSDDCT-DetF network.In Section 3, we derive closed-form expressions for the outage probability and average BER of the RSDDCT-DetF network.The asymptotic system behaviors and the power allocation strategy are provided in Sections 4 and 5, respectively.Finally, Section 6 presents our numerical results, and Section 7 concludes the paper. System Model where (0) = (0) = 1 and is the number of the symbols to be sent within a frame.Consider a fading channel with random CFO; the received signals can be expressed as where is the transmit power, ℎ is the channel fading coefficient, ∈ [−, ) is the unknown normalized random CFO in radians, and () ∼ (0, 0 ) is the additive white Gaussian noise (AWGN), with (, 2 ) representing a complex Gaussian random variable with mean and variance 2 .The optimal decoder for DDBPSK signals is the maximum likelihood decoder (MLD), which is given by [39] where () = () * (−1).It is noted from (3) that the MLD for DD signals can be regarded as a differential detector for the equivalent single-differential signals (), which is clearly depicted in Figure 1(b).Thus, the BER performance of DD signals can be characterized by (), which is given by where Δ() = arg(()) − arg(( − 1)).It is shown that the instantaneous SNR of () can be approximated as where = |ℎ| 2 / 0 .Based on the SNR approximation, the BER performance of the DDBPSK modulation can be evaluated as [34] RSDDCT-DetF Mobile Wireless Sensor Networks.Consider a MWSN as shown in Figure 2(a), where a source sensor node () communicates with a destination sensor node () with the assistance of a number of potential relay sensor nodes ( = 1, . . ., ).We suppose that the channel coefficients ℎ (between and ), ℎ (between and ), and ℎ (between and ) are all flat Rayleigh fading coefficients.In addition, ℎ , ℎ , and ℎ are mutually independent and nonidentical.It is assumed that all the links are perturbed by independent random CFOs ( , , and ) and the fading coefficients keep constant within one frame and independently change from one frame to another, which are modeled as ℎ ∼ (0, 2 1 ), ℎ ∼ (0, 2 2 ), and ℎ ∼ (0, 2 3 ).We also assume here that all the AWGN terms of all links have zero mean and equal two-sided spectral density ( 0 /2).The transmit powers of the source sensor and each relay sensor are 1 = and 2 = (1 − ), respectively, where = 1 + 2 denotes the total transmit power of the network and ∈ (0, 1) is the power allocation factor.In this respect, the MWSN under consideration can be further described by a more informative block model, which is presented in Figure 2(b). Suppose that each transmission frame is of length 2, where is the number of the data symbols transmitted from each sensor node within a frame.During the first phase, broadcasts a stream of signal sequence of length to all the relay sensors and ; the received symbols at the th relay sensor and can be expressed as where and are the random CFOs; () and () denote the AWGN at the th relay and , respectively.In this paper, we assume that all the CFOs follow uniform distribution and remain constant over at least three consecutive symbol intervals.However, it should be pointed out that, in general, there is no restriction over the probability distribution of the CFOs and they could have any probability distribution. 
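A minimal sketch of why double-differential BPSK bypasses the unknown CFO, under simplified assumptions (single flat-fading link with unit gain and unknown phase, no relay, ideal sampling): the bits are encoded by two nested differential steps, and the receiver forms two conjugate-delay products, so the constant per-symbol phase increment introduced by the CFO cancels. All names and parameters are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ddbpsk_tx(bits):
    """Double-differential BPSK: two nested differential encodings (b -> v -> s)."""
    v, s, out = 1.0, 1.0, []
    for b in 2 * bits - 1:                  # map {0,1} -> {-1,+1}
        v = v * b                           # first differential step
        s = s * v                           # second differential step
        out.append(s)
    return np.array(out, dtype=complex)

def ddbpsk_rx(y):
    """Two conjugate-delay products remove both the channel phase and the CFO."""
    z = y[1:] * np.conj(y[:-1])             # CFO reduced to a constant phase factor
    u = z[1:] * np.conj(z[:-1])             # constant phase cancels; u is proportional to b(k)
    return (u.real > 0).astype(int)

n_bits = 10_000
bits = rng.integers(0, 2, n_bits)
s = ddbpsk_tx(bits)

theta = rng.uniform(-np.pi, np.pi)          # unknown channel phase (unit gain for simplicity)
omega = rng.uniform(-np.pi, np.pi)          # unknown CFO, rad/symbol
snr_db = 20.0
noise_std = np.sqrt(0.5 / 10 ** (snr_db / 10))
k = np.arange(n_bits)
y = np.exp(1j * (theta + omega * k)) * s + noise_std * (
        rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits))

bits_hat = ddbpsk_rx(y)
ber = np.mean(bits_hat != bits[2:])         # the first two bits act as references here
print(f"BER = {ber:.4f} despite an uncompensated CFO of {omega:.2f} rad/symbol")
```

The decision statistic depends only on the information symbol, which is what makes the scheme insensitive to both the unknown channel phase and the CFO, at the cost of the SNR penalty discussed in the paper.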
In the second phase, only the relay sensor with the best link is selected to forward the remodulated signals to .Assuming that relay sensor is selected, the received symbols at can be written as where denotes the CFO of the second-link, x () represents the remodulated symbols at , and () is the AWGN of the selected - link. To take into account the detection errors at the relay sensors, we apply the one-hop equivalent link model developed in [25].As shown in Figure 2(b), the equivalent one-hop links are denoted by , = 1, . . ., , and the SNR of each equivalent link can be approximated as where instantaneous SNR of the - and - links, respectively, with γ = / 0 denoting the average SNR.It is worth pointing out that the same bounding technique has been widely adopted in the performance analysis of various relay systems; see [25,27,28] and references therein.In addition, it has been demonstrated that this lower bound is in general very tight for a wide range of SNR. Based on the equivalent links, the best relay sensor is selected according to the Max-Min criterion given by = arg max where the instantaneous SNR of the worst one of the twohop relay link is maximized.To perform relay selection at the relay sensors, the source sensor and destination sensor should send request-to-send (RTS) and clear-to-send (CTS) packets before each transmission, respectively.Based on the RTS and CTS packets, each relay sensor first estimates the amplitudes of the channels from the source and destination sensors to calculate , and the equivalent SNR ; then a backoff timer is set to be inversely proportional to ; therefore, the best relay sensor with the largest equivalent SNR has the smallest backoff time and consequently occupies the channel to forward its signals to the destination sensor, and all other relay sensors keep silent in this transmission interval. In order to efficiently combine the two received signals, we propose a new linear combiner to process the DD signals, where the two branches are combined as in which the weights of the two branches are chosen to maximize the output SNR of () [25] and and denote the equivalent SNR and second-hop SNR of the selected link, respectively.After combination, the total output SNR can be rewritten as where is the instantaneous SNR of the direct link. Performance Analysis In this section, we present a comprehensive performance analysis for the RSDDCT-DetF network described in the previous section.Specifically, we derive exact closed-form expressions for two important performance metrics, that is, outage probability and average BER. Preliminaries. Before delving into the detailed analysis, we first present the statistical behaviors of and , which will be frequently invoked in the subsequent derivations. As all the channels undergo Rayleigh fading, the PDFs and CDFs of , , and can be written as [40] where , and γ = (1 − ) 2 3 γ .Using ( 16) as a starting point, we can derive the statistic properties of as the following lemma. Lemma 1.The PDF and CDF of are given by where )) γ and −1 denotes the binomial coefficient. Proof.As is the smaller one between and , we can write its CDF as where . Setting derivative of (19) with respect to , the PDF of can be derived as Note that (10) suggests that is essentially the largest order statistic of ; hence, by utilizing [41, Equation (2.1.1)],we can obtain the CDF in (18); then differentiating it with respect to and with the help of the binomial theorem, the PDF (17) can be derived. Outage Probability. 
The outage probability is defined as the probability that the end-to-end SNR drops below a certain threshold ℎ .With the help of ( 12), the outage probability can be rewritten as A closed-form outage probability of the network can be derived as the following theorem. Theorem 2. The outage probability of the RSDDCT-DetF network is given by where γ and γ are the average SNR of the direct and equivalent links, respectively.Proof.By utilizing the law of total probability, the outage probability (21) can be calculated as out = ∫ ℎ 0 () ( ℎ − ); substitute (15) and ( 17) into (21); and, with the help of the binomial theorem, Theorem 2 can be derived. Average Bit Error Rate. We now turn our attention to the average BER performance of the network.According to [42, Equation (12.1-13)], the conditional BER of the differential BPSK modulation with -channel diversity reception is given by where is the instantaneous total SNR at the receiver and is defined as With the help of ( 5) and ( 12), the conditional BER of the RSDDCT-DetF network with DDBPSK can be approximated by the conditional BER of the 2-channel reception DBPSK, which arrives at where = ( + )/2−1/2 denotes the total instantaneous SNR of the equivalent single-differential signals. Theorem 3. The closed-form average BER of the RSDDCT-DetF network with DDBPSK modulation is given by ] . (26) Proof.Due to the independence of and , the average BER can be obtained by averaging (25) with respect to and , respectively, as follows: We first calculate the first part as ) where the second equality is derived by using [43,Equation (3.312)] and Γ() = ∫ ∞ 0 − −1 is the Gamma function.On the other hand, substituting (15) and (17), with the help of [43,Equation (3.351)], the second part of P can be simplified into ) To this end, Theorem 3 is obtained by combining ( 28) and ( 29). Note that Theorems 2 and 3 present accurate expressions for the outage probability and average BER of a network with arbitrary number of relay sensors, and the formulas (22) and (26) only involve standard functions which can be directly calculated, thereby providing fast and efficient means to evaluate the system performance. Asymptotic Behaviors Analysis Although the expressions for the outage probability and average BER derived in Theorems 2 and 3 enable numerical evaluation of the system performance and may not be computationally intensive, they do not offer physical insights into the impact of the system parameters, for example, fading parameters and number of relay sensors.To circumvent this, we now focus on the high SNR regime to analyze the asymptotic behaviors of the outage probability and average BER. Corollary 4. The asymptotic outage probability of the RSDDCT-DetF network in the high SNR regime is given by where 1 ( ℎ , , 1 , ) = Proof.Making use of the Maclaurin series expansion of the exponential function, ( ) and ( ) can be approximated in the high SNR region as where the high-order terms are ignored.Substituting ( 31) into ( 22), the outage probability can be rewritten as where we have utilized the identity ∑ 3 )) γ in (32), we can rewrite the outage probability as where 3 ).Note that the second term in (33) can be ignored when γ is large enough, which yields Corollary 4. Corollary 5. The asymptotic average BER of the RSDDCT-DetF network with DDBPSK modulation at high SNRs is given by 3 ), is the number of relay sensors, and γ denotes the average SNR. 
Proof.It is obvious that the first part of P can be approximated at sufficiently high SNRs as To obtain the asymptotic expression of P 2 , we first denote the sum term of (29) by , as Letting γ = 1 + γ /2, after some manipulations, we have where the second equality is derived in a way similar to [27,Equation (31)].By using [43, Equation (0.154.4)] to calculate the sum term, (37) can be approximated at high SNRs as Recall that γ ≈ γ /2; we then have ≈ !2 +1 ( γ ) −( +1) ; thus the second part of P can be approximated as Adding (35) to (39), we can rewrite the average BER as Substituting 3 )) γ in (40), the asymptotic average BER can be expressed as To this end, Corollary 5 is proved. Corollaries 4 and 5 demonstrate that a RSDDCT-DetF network with relay sensors can achieve a full diversity order of + 1 at sufficiently high SNRs.The above expressions (30) and (34) also reveal straightforwardly the impact of the model parameters on the system performance.More specifically, we can see that, by increasing and the sensor nodes' power, the outage probability and average BER will reduce.In addition, we can also observe that the diversity of the MWSN is determined by the number of relay sensors.We will show through numerical evaluation in Section 6 that both the asymptotic outage probability (30) and the average BER (34) tightly correlate with their exact analytical counterparts in the high SNR regime; thus we can precisely predict the system performance through Corollaries 4 and 5 at sufficiently high SNRs. Optimum Power Allocation The optimum power allocation for improving the system performance (e.g., minimizing the outage probability or error probability) has been a very hot research area [24,26,34].As MSNs are generally power-limited systems, it may be of particular importance to investigate the optimum power allocation for MSNs.With this observation in mind, in this section, we address the power allocation issue for the RSDDCT-DetF sensor network under consideration to improve its average BER performance. Having the closed-form average BER (26), we are about to investigate the power allocation among the source sensor and the relay sensors to minimize the average BER under a total transmit power = 1 + 2 .The optimization problem of the power allocation can be formulated as However, it is generally difficult to directly manipulate the exact average BER expression (26), and the optimum solution can only be derived through exhaustive search.In order to obtain a simple closed-form solution, we choose to look into the high SNR regime and determine the optimum power allocation scheme by use of the asymptotic average BER expression.Note that, given a fading scenario, 2 ( , 1 , 2 , 3 )/ γ +1 in (34) is a constant which only depends on the average SNR and the number of relay sensors; therefore, based on the asymptotic average BER expression, the optimization problem of the power allocation can be rewritten as Note that ( 43) is an equation with only one variable, namely, ; thus, by differentiating the objective function in (43) with respect to and setting the derivative equal to zero, we can derive the optimum power allocation factor op as follows. Case A. For the scenario 2 2 = 2 3 , the optimum power allocation factor is Case B. For the scenario 2 2 ̸ = 2 3 , the optimum power allocation factor is where 3 ). 
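Since the closed-form factors in (44) and (45) are not fully legible here, the following sketch (ours, using the outage probability as a tractable stand-in for the average BER objective, and with our own symbol names rho, sigma_i, gamma_bar) illustrates how the optimum power split between the source and the relay sensors could be located by a simple numerical search under the same one-hop equivalent-link model adopted above.

```python
# A minimal Monte Carlo sketch (not the paper's closed-form solution) that searches
# numerically for the power-allocation factor rho = P1/P minimizing a performance
# proxy (outage probability).  All symbols (rho, sigma1, sigma2, sigma3, gamma_bar,
# gamma_th) are our own names for the quantities discussed in the text; the
# min(.,.) equivalent SNR follows the one-hop equivalent link model adopted above.
import numpy as np

rng = np.random.default_rng(0)

def outage_probability(rho, n_relays, gamma_bar_dB, sigma1=1.0, sigma2=1.0,
                       sigma3=1.0, gamma_th_dB=10.0, n_trials=200_000):
    """Estimate P(gamma_d + gamma_eq < gamma_th) under Rayleigh fading."""
    gamma_bar = 10 ** (gamma_bar_dB / 10)
    gamma_th = 10 ** (gamma_th_dB / 10)
    # Exponentially distributed instantaneous SNRs (Rayleigh fading amplitudes).
    gamma_d = rng.exponential(rho * gamma_bar * sigma1, n_trials)
    gamma_1 = rng.exponential(rho * gamma_bar * sigma2, (n_trials, n_relays))
    gamma_2 = rng.exponential((1 - rho) * gamma_bar * sigma3, (n_trials, n_relays))
    # One-hop equivalent SNR of each relay link and Max-Min relay selection.
    gamma_eq = np.minimum(gamma_1, gamma_2).max(axis=1)
    return np.mean(gamma_d + gamma_eq < gamma_th)

# Grid search over the power-allocation factor for a few fading scenarios.
for sigma2, sigma3 in [(1.0, 1.0), (4.0, 1.0), (1.0, 4.0)]:
    rhos = np.linspace(0.05, 0.95, 19)
    pout = [outage_probability(r, n_relays=3, gamma_bar_dB=20,
                               sigma2=sigma2, sigma3=sigma3) for r in rhos]
    best = rhos[int(np.argmin(pout))]
    print(f"sigma2^2={sigma2}, sigma3^2={sigma3}: best rho ~ {best:.2f}")
```

Under the assumed proxy one would expect the optimum factor to shift below 1/2 when the source-relay channels are much stronger than the relay-destination ones, and above 1/2 in the opposite case, in line with the observations that follow.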
We observe from (44) and (45) that the equal power allocation always cannot provide the best average BER performance and the optimum power allocation is dependent on the channel variances, that is, 2 2 , 2 3 and the number of relay sensors.More specifically, in the case 2 2 = 2 3 , the optimum power factor op is larger than 1/2 and when → +∞, the equal power allocation yields the best average BER performance.It can be easily shown that, for the scenario 2 2 > 2 3 , the best power allocation factor op is smaller than ( +1)/(2 +1), and when 2 2 ≫ 2 3 , we further have op < 1/2, which indicates that more power should be allocated to the relay sensors.On the other hand, when it comes to the scenario 2 2 < 2 3 , the optimum power factor op is larger than 1/2, which suggests that more transmit power should be allocated to the source sensor. Numerical Results In this section, the theoretical results derived in the previous sections are validated by a set of Monte Carlo simulations, where we consider slow Rayleigh fading channels with random CFOs, and the transmission length is set to 100.Throughout our simulations, we suppose that all the CFOs are mutually independent and uniformly distributed over [−2 , 2 ], where denotes the maximum normalized CFO.Without special explanation, the transmit power is always equally split between the source sensor and the relay sensors, that is, 1 = 2 = /2.Note that the SNR refers to the average SNR γ in the following discussions. We first examine the outage performance of the RSDDCT-DetF network.Figures 3 and 4 demonstrate the simulated outage probabilities along with the accurate outage probabilities in (22) and their corresponding asymptotic approximations in (30).In our simulations, the predetermined SNR threshold is ℎ = 10 dB, and the maximum CFO is = 0.5.In Figure 3, we consider three possible numbers of the relay sensors, namely, = 1, 2, 4, respectively, and the fading gains are equal, that is, 2 1 = 2 2 = 2 3 = 1.In Figure 4, is fixed to 3, and four different fading scenarios are investigated.As can be seen from the two figures, the accurate analytical curves match well with the simulation results across the entire SNR range in all the scenarios.Moreover, the proposed asymptotic outage probabilities yield excellent tightness at high SNRs.Likewise, the analytical outage probability expressions can very efficiently predict the exact outage probability.Another observation is that the achieved diversity order is dependent on the number of relay sensors, and it increases when more relay sensors are used.Specifically, about 5 dB performance improvement can be observed at an outage probability of 10 −3 as increases from 1 to 2. In addition, we can obtain a performance gain of about 4 dB at the same outage probability by increasing from 2 to 4. In Figures 5 and 6, we proceed to illustrate the average BER performance of the RSDDCT-DetF network.We have performed the simulations in the same scenarios as these of Figures 3 and 4. 
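As a complement to the simulation results reported in this section, the following minimal Monte Carlo sketch (ours) mimics the stated setup (Rayleigh fading, equal channel gains, an SNR threshold of 10 dB, Max-Min selection over the one-hop equivalent links) at two high-SNR points and reads off the slope of the outage curve, which should approach the full diversity order N_R + 1 claimed earlier.

```python
# A minimal sketch, assuming the same one-hop equivalent-link model as above:
# estimate the outage probability at two high-SNR points and read off the slope,
# which should approach the claimed full diversity order N_R + 1.  For N_R = 4 the
# estimate is statistically noisy unless n_trials is increased further.
import numpy as np

rng = np.random.default_rng(1)
gamma_th = 10 ** (10 / 10)          # 10 dB outage threshold
snr_dB = np.array([20.0, 25.0])     # two high-SNR points for the slope estimate
n_trials = 1_000_000

for n_relays in (1, 2, 4):
    pout = []
    for s in snr_dB:
        g = 10 ** (s / 10) / 2      # equal power split between source and relays
        gamma_d = rng.exponential(g, n_trials)
        gamma_eq = np.minimum(rng.exponential(g, (n_trials, n_relays)),
                              rng.exponential(g, (n_trials, n_relays))).max(axis=1)
        pout.append(np.mean(gamma_d + gamma_eq < gamma_th))
    # Diversity order ~ negative slope of log10(P_out) per decade of SNR.
    slope = -(np.log10(pout[1]) - np.log10(pout[0])) / ((snr_dB[1] - snr_dB[0]) / 10)
    print(f"N_R = {n_relays}: P_out = {pout}, estimated diversity ~ {slope:.1f}")
```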
In the two figures, the simulated average BER curves are plotted along with the analytical average BER given in (26). The results pertaining to the asymptotic expression for the average BER given by (34) are also included in the figures. Note that, for the sake of completeness, the simulated average BER curve of the noncooperative (direct transmission) system is also plotted to serve as a benchmark. To ensure a fair comparison, we assume that the total transmit powers are equal; in other words, the transmit power of the source sensor in the noncooperative case is twice that of the other systems. As observed, the analytical expression is in close agreement with the simulated results in each scenario, even in the low SNR region. In addition, for all the cases, the asymptotic BER curves and the other two corresponding curves overlap at high SNRs, which confirms the correctness of our theoretical analysis. In Figure 5, it is clear that each cooperative network outperforms the noncooperative network once the SNR exceeds a certain threshold. We can also observe from the two graphs that the achievable diversity order of the network depends on the number of relay sensors and increases with the number of potential relay sensors. To be specific, we observe about 3.5 dB performance improvement at an average BER of 10 −3 when the number of relay sensors increases from 1 to 2, and an additional 2.7 dB gain is obtained at the same average BER when it increases from 2 to 4. Figure 7 compares the average BER performance of the RSDDCT-DetF network and the DM-RS-DetF network with DBPSK modulation, where five different random CFOs are considered.

In particular, we have derived closed-form expressions for the outage probability and average BER performance of a RSDDCT-DetF network at arbitrary SNRs. Moreover, simplified asymptotic outage probability and average BER expressions in the high SNR regime are deduced, which indicate that a RSDDCT-DetF network consisting of a source sensor, relay sensors, and a destination sensor can achieve a full diversity order of + 1 at sufficiently high SNRs. We have shown that in the RSDDCT-DetF networks, the destination sensors are able to detect their data without any knowledge of the channel fading coefficients or MCFOs. It is revealed that the RSDDCT-DetF network is inferior to its single-differential counterpart in the absence of MCFOs; however, the RSDDCT-DetF network performs well over fading channels with random MCFOs, where the single-differential networks fail to work. We have also investigated the power allocation optimization problem to improve the average BER performance based on the derived analytical expressions. Monte Carlo simulations show that our optimum power allocation strategy provides considerable average BER performance enhancement as compared to the equal power allocation strategy.

Figure 3: Outage probabilities of the RSDDCT-DetF network using different numbers of relay sensors.
7,287
2014-04-03T00:00:00.000
[ "Computer Science", "Engineering" ]
Looking through the mirror : Optical microcavity-mirror image photonic interaction Although science fiction literature and art portray extraordinary stories of people interacting with their images behind a mirror, we know that they are not real and belong to the realm of fantasy. However, it is well known that charges or magnets near a good electrical conductor experience real attractive or repulsive forces, respectively, originating in the interaction with their images. Here, we show strong interaction between an optical microcavity and its image under external illumination. Specifically, we use silicon nanospheres whose high refractive index makes well-defined optical resonances feasible. The strong interaction produces attractive and repulsive forces depending on incident wavelength, cavity-metal separation and resonance mode symmetry. These intense repulsive photonic forces warrant a new kind of optical levitation that allows us to accurately manipulate small particles, with important consequences for microscopy, optical sensing and control of light by light at the nanoscale. ©2012 Optical Society of America OCIS codes: (350.4855) Optical tweezers or optical manipulation; (160.3918) Metamaterials; (290.4020) Mie theory. References and links 1. J. D. Jackson, Classical Electrodynamics (John Wiley & Sons, Inc, 1962). 2. E. H. Brandt, “Levitation in physics,” Science 243(4889), 349–355 (1989). 3. I. V. Lindell, E. Alanen, and K. Mannersalo, “Exact image method for impedance computation of antennas above the ground,” IEEE Trans. Antenn. Propag. AP-33, 937–945 (1984). 4. R. Fenollosa, F. Meseguer, and M. Tymczenko, “Silicon colloids: from microcavities to photonic sponges,” Adv. Mater. (Deerfield Beach Fla.) 20(1), 95–98 (2008). 5. E. Xifré-Pérez, R. Fenollosa, and F. Meseguer, “Low order modes in microcavities based on silicon colloids,” Opt. Express 19(4), 3455–3463 (2011). 6. E. Xifré-Pérez, F. J. García de Abajo, R. Fenollosa, and F. Meseguer, “Photonic binding in silicon-colloid microcavities,” Phys. Rev. Lett. 103(10), 103902 (2009). 7. A. García-Etxarri, R. Gómez-Medina, L. S. Froufe-Pérez, C. López, L. Chantada, F. Scheffold, J. Aizpurua, M. Nieto-Vesperinas, and J. J. Sáenz, “Strong magnetic response of submicron silicon particles in the infrared,” Opt. Express 19(6), 4815–4826 (2011). 8. N. Engheta, “Circuits with light at nanoscales: optical nanocircuits inspired by metamaterials,” Science 317(5845), 1698–1702 (2007). 9. R. Merlin, “Metamaterials and the Landau-Lifshitz permeability argument: large permittivity begets highfrequency magnetism,” Proc. Natl. Acad. Sci. U.S.A. 106(6), 1693–1698 (2009). 10. C. M. Soukoulis, M. Kafesaki, and E. N. Economou, “Negative index materials: new frontiers in optics,” Adv. Mater. (Deerfield Beach Fla.) 18(15), 1941–1952 (2006). 11. D. R. Smith, J. B. Pendry, and M. C. K. Wiltshire, “Metamaterials and negative refractive index,” Science 305(5685), 788–792 (2004). 12. M. Burresi, D. van Oosten, T. Kampfrath, H. Schoenmaker, R. Heideman, A. Leinse, and L. Kuipers, “Probing the magnetic field of light at optical frequencies,” Science 326(5952), 550–553 (2009). 13. A. Ashkin, “Acceleration and trapping of particles by radiation pressure,” Phys. Rev. Lett. 24(4), 156–159 (1970). 14. A. Ashkin and J. M. Dziedzic, “Observation of resonances in the radiation pressure on dielectric spheres,” Phys. Rev. Lett. 38(23), 1351–1354 (1977). 
15. A. Ashkin and J. M. Dziedzic, “Optical trapping and manipulation of viruses and bacteria,” Science 235(4795), 1517–1520 (1987). 16. K. Dholakia, P. Reece, and M. Gu, “Optical micromanipulation,” Chem. Soc. Rev. 37(1), 42–55 (2007). 17. D. G. Grier, “A revolution in optical manipulation,” Nature 424(6950), 21–27 (2003). 18. F. M. Fazal and S. M. Block, “Optical tweezers study life under tension,” Nat. Photonics 5(6), 318–321 (2011). 19. M. Righini, A. S. Zelenina, C. Girard, and R. Quidant, “Parallel and selective trapping in a patterned plasmonic landscape,” Nat. Phys. 3(7), 477–480 (2007). 20. M. L. Juan, M. Righini, and R. Quidant, “Plasmon nano-optical tweezers,” Nat. Photonics 5(6), 349–356 (2011). 21. R. Quidant and C. Girard, “Surface-plasmon-based optical manipulation,” Laser Photon. Rev. 2(1-2), 47–57 (2008). 22. M. Righini, G. Volpe, C. Girard, D. Petrov, and R. Quidant, “Surface plasmon optical tweezers: tunable optical manipulation in the femtonewton range,” Phys. Rev. Lett. 100(18), 186804 (2008). 23. G. Volpe, R. Quidant, G. Badenes, and D. Petrov, “Surface plasmon radiation forces,” Phys. Rev. Lett. 96(23), 238101 (2006). 24. M. Righini, P. Ghenuche, S. Cherukulappurath, V. Myroshnychenko, F. J. García de Abajo, and R. Quidant, “Nano-optical trapping of Rayleigh particles and Escherichia coli bacteria with resonant optical antennas,” Nano Lett. 9(10), 3387–3391 (2009). 25. P. W. Barber and S. C. Hill, Light Scattering by Particles: Computational Methods (World Scientific, Singapore, 1990). 26. T. Sannomiya and C. Hafner, “Multiple multipole program modelling for nano plasmonic sensors,” J. Comput. Theor. Nanoscience 7(8), 1587–1595 (2010). 27. L. Novotny, D. W. Pohl, and B. Hecht, “Scanning near-field optical probe with ultrasmall spot size,” Opt. Lett. 20(9), 970–972 (1995). 28. F. J. García de Abajo, “Multiple scattering of radiation in clusters of dielectrics,” Phys. Rev. B 60(8), 6086–6102 (1999). 29. E. Palik, Handbook of Optical Constants of Solids (Academic Press, New York, 1985). 30. R. Zhao, P. Tassin, T. Koschny, and C. M. Soukoulis, “Optical forces in nanowire pairs and metamaterials,” Opt. Express 18(25), 25665–25676 (2010). 31. X. Yang, Y. Liu, R. F. Oulton, X. Yin, and X. Zhang, “Optical forces in hybrid plasmonic waveguides,” Nano Lett. 11(2), 321–328 (2011). 32. F. J. García de Abajo, “Momentum transfer to small particles by passing electron beams,” Phys. Rev. B 70(11), 115422 (2004). 33. K. M. Hurst, C. B. Roberts, and W. R. Ashurst, “A gas-expanded liquid nanoparticle deposition technique for reducing the adhesion of silicon microstructures,” Nanotechnology 20(18), 185303 (2009). 34. J. N. Israelachvili, Intermolecular and Surface Forces (Academic, London, 1992). 35. A11 equal to 31⨉10 −20 J; A22 equal to 6.5⨉10 ; A33 equal to 4⨉10 .

Introduction

The theory of image charges offers a useful method for solving the electric and magnetic field distributions of either a charge or a magnet near the flat surface of a perfect electric conductor (PEC) [1]. In particular, the field distribution for a charge is obtained by replacing the conductor by a fictitious charge placed at the mirror image of the real charge, but with opposite sign (Fig.
1(a)). For a magnet near a PEC surface, the mirror magnet has a magnetization vector that is just the specular reflection of the magnetization vector produced by the real source (Fig. 1(b)). We therefore conclude that the force between a charge and a PEC surface is always attractive, whereas the force between a magnet and a PEC surface is always repulsive. This simple analysis finds application in electrostatic adsorption and magnetic levitation [2].

Here, we show that the method of charge and magnet images can be extended to understand the effect of flat metallic surfaces on the modes of neighbouring dielectric microcavities, and we theoretically show the application of these concepts to a new form of optical levitation. We follow a procedure similar to the simulation of electromagnetic (EM) modes in resonant antennas near the earth ground [3]. Specifically, we use silicon colloids [4] as high-refractive-index, nanometre-sized spherical microcavities because they display well-defined, sharp Mie resonances with huge scattering cross-sections [5,6]. The microcavity modes can be envisaged as oscillating electric and magnetic multipoles. The presence of a flat metallic surface induces mirror EM multipole images. In particular, the electric-dipole (ED) mode parallel to the surface is the easiest to understand: in an electrostatic picture, the metallic surface induces an image ED with antiparallel dipole moment (see Fig. 1(c)), and consequently, the microcavity undergoes attraction towards the conductor. In contrast, a parallel magnetic dipole (MD) induces a mirror parallel magnetic dipole, which results in a repulsive photonic force (see Fig. 1(c)). Generally speaking, multipolar optical modes in microcavities near good conductors induce attractive or repulsive forces depending on the type of resonance that is involved. The net photonic force acting on the microcavity under illumination by a laser impinging perpendicularly to the metal surface emerges from the balance between the attractive and repulsive forces acting on it. The photonic force is stronger for modes producing a larger scattering cross-section (e.g., modes in cavities with high refractive index such as our silicon microspheres). However, magnetic (TM) and electric (TE) resonances are usually close in wavelength, and consequently, attractive and repulsive forces tend to cancel out, except for low-order modes, which tend to be well separated (see Fig. 2). More precisely, magnetic-like photonic forces are dominant for low-order TM modes with strong scattering (e.g., the fundamental TM mode b11, see Fig. 2) that lie well apart from TE modes. TM modes also exhibit a large magnetic response [7]. Here we demonstrate a dominant magnetic repulsive force over the electric attractive force in silicon nanospheres when the external light is tuned near the b11 resonance. This repulsive photonic force is larger than other competing forces such as van der Waals (vdW) attraction and Brownian motion.
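To make the role of these well-separated low-order resonances more tangible, the short sketch below (ours) evaluates the scattering efficiency of a single homogeneous sphere with the standard Mie series, following Bohren-Huffman style recurrences; the radius of 230 nm and the constant refractive index of 3.5 are the values adopted later in the numerical-methods section, and the sketch reproduces only the kind of single-sphere spectrum shown in Fig. 2, not the force calculation near the mirror.

```python
# A minimal sketch of the standard Mie series (Bohren-Huffman style recurrences)
# for the scattering efficiency Q_sca of a homogeneous sphere in vacuum.  It is
# meant only to illustrate the kind of resonance spectrum discussed around Fig. 2
# (silicon sphere, radius 230 nm, constant refractive index 3.5); it is not the
# multiple-scattering (MESME) or FDTD force calculation used in the paper.
import numpy as np

def mie_qsca(m, x):
    """Scattering efficiency of a sphere with relative index m and size parameter x."""
    nmax = int(round(x + 4 * x ** (1 / 3) + 2))
    mx = m * x
    # Logarithmic derivative D_n(mx) by downward recurrence.
    D = np.zeros(nmax + 16, dtype=complex)
    for n in range(nmax + 15, 0, -1):
        D[n - 1] = n / mx - 1.0 / (D[n] + n / mx)
    # Riccati-Bessel functions by upward recurrence.
    psi_m1, psi0 = np.cos(x), np.sin(x)          # psi_{-1}, psi_0
    chi_m1, chi0 = -np.sin(x), np.cos(x)         # chi_{-1}, chi_0
    qsca = 0.0
    for n in range(1, nmax + 1):
        psi1 = (2 * n - 1) / x * psi0 - psi_m1
        chi1 = (2 * n - 1) / x * chi0 - chi_m1
        xi0, xi1 = psi0 - 1j * chi0, psi1 - 1j * chi1
        an = ((D[n] / m + n / x) * psi1 - psi0) / ((D[n] / m + n / x) * xi1 - xi0)
        bn = ((m * D[n] + n / x) * psi1 - psi0) / ((m * D[n] + n / x) * xi1 - xi0)
        qsca += (2 * n + 1) * (abs(an) ** 2 + abs(bn) ** 2)
        psi_m1, psi0 = psi0, psi1
        chi_m1, chi0 = chi0, chi1
    return 2.0 / x ** 2 * qsca

radius = 230e-9          # sphere radius (m), as in the text
n_si = 3.5               # constant refractive index assumed for silicon
for lam in np.linspace(1.2e-6, 2.0e-6, 9):
    x = 2 * np.pi * radius / lam
    print(f"lambda = {lam*1e9:6.0f} nm   Q_sca = {mie_qsca(n_si, x):.2f}")
```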
Magnetism at optical frequencies has recently been achieved through elaborate metamaterial designs [8][9][10][11], and even direct evidence of the resulting magnetic field has been reported using an artificial magnetic atom attached to a probing tip [12]. In this context, our silicon nanospheres constitute a new class of magnetic atoms, capable of amplifying the magnetic component of the EM waves at optical frequencies, which in turn produces repulsive forces near a metal surface.

Photonic forces are extensively used in optical tweezers [13][14][15] for manipulating microparticles [16][17][18]. Recently, plasmons have been shown to produce intense gradient forces capable of trapping small nanoparticles [19][20][21][22][23][24]. In these studies, low-refractive-index dielectrics are used, producing relatively weak scattering [25]. This is in contrast to our high-index nanospheres, from which we derive completely new phenomena under external illumination for wavelengths in resonance with low-order magnetic and electric optical modes near a metallic film. In this work, we show that the EM field distribution of the low-order modes of a spherical optical microcavity near a metallic surface under external laser irradiation is identical to the EM field of a particle dimer in which each particle is illuminated with counter-propagating laser beams, as deduced from the mirror image method discussed above. We also report strong repulsive photonic forces four orders of magnitude larger than the particle weight (F = mg = 1.2⨉10 −3 pN) over a broad range of laser wavelengths. Importantly, the optical force acting on a silicon nanosphere can be tuned from repulsive to attractive and vice versa by selecting either the light wavelength or the gap between the sphere and the metal.

Numerical methods

We calculate both EM fields and optical forces through two methods: (a) by rigorously solving Maxwell's equations using a highly convergent multiple-scattering method based upon a multipolar expansion of the fields near the sphere and an expansion in plane waves elsewhere, including reflection at the planar metal surface (MESME). This is similar to previous approaches based on the multiple multipole method [26,27] and specific implementations for aggregates of up to thousands of spherical particles [28]; and (b) by finite difference time domain (FDTD) calculation using publicly available software (Lumerical FDTD Solutions). The results from both methods are in full agreement with each other. The dielectric constant of Au is taken from Ref. [29], while we assume a frequency-independent refractive index of 3.5 and 1.6 for silicon and polystyrene (PS), respectively. The radius of the silicon sphere is set to 230 nm, so that the fundamental optical modes appear in the near-infrared (NIR) region. The optical force is obtained upon integration of the Maxwell stress tensor [30,31], which results in an analytical formula in terms of the multipolar field components near the sphere [32]. A plane EM wave at normal incidence with an intensity of 10 mW/µm^2 is assumed. However, due to the small size of the microcavity, only a tiny power fraction (1.6 mW) is directly impinging on the sphere.

Results and discussion

We start by inspecting the EM field distribution in the microcavity placed near a PEC surface, as shown schematically in the insets of Fig. 3(b). Because of the high refractive index of silicon, a sphere as small as 0.46 µm is capable of supporting well-defined low-order Mie resonance modes (as shown in Fig.
2). The lowest-order Mie mode resonances (b11 and a11 in Fig. 2) correspond to transverse magnetic (TM) and transverse electric (TE) modes. We have plotted the EM field distribution for two wavelengths (λ = 1434 nm and λ = 1744 nm) at which the TM and TE modes are dominant (see below). The upper plots of Fig. 3(a) (TM dipole, TMD) show the distribution of the Ex, Ez and Hy fields at 1434 nm, where the TM mode b11 is dominant. In particular, Ex shows a typical dipole feature, which demonstrates that the incident light induces an electric dipole resonance. Interestingly, Ez shows strong enhancement in the gap region, which illustrates the strong coupling existing between the sphere and the metal surface. The Hy component indicates that the incident light also induces a very strong magnetic dipole resonance in the silicon sphere. Both modes contribute to the scattering cross-section but compete with each other to produce a net photonic force (see Fig. 3(b)). The lower plots of Fig. 3(a) show the EM field at a wavelength of 1744 nm, for which the TE mode becomes dominant (TE dipole, TED).

Figure 3(c) shows the EM field distribution at the same wavelengths as in Fig. 3(a), but for the case of a dimer consisting of two identical microcavities located symmetrically with respect to the (now absent) metal interface and illuminated by counter-propagating laser beams, phase-shifted by π with respect to each other, as suggested by direct application of the mirror image method. The EM fields in both microcavities at λ = 1434 nm (upper plots) and λ = 1744 nm (lower plots) have a plane of symmetry at the planar surface, while the fields in the upper semi-space are identical to those of Fig. 3(a). Comparison between the full calculation and the image method for different modes and parameters thus results in identical field distributions. Therefore, the image method can be extended to electrodynamics when microcavities near a perfect conductor are considered.

Now, we examine the resulting optical forces. The black solid curve (FDTD results) and the grey dash curve (MESME results) in Fig.
3(b) show the optical force acting on a silicon sphere near a PEC surface (gap equal to 10 nm) as a function of incident wavelength. Positive values correspond to repulsive forces. A strong repulsive optical force opposite to the laser wave vector direction emerges in the region between 1400 nm and 1600 nm. Figure 3(d) shows the photonic force acting on two spheres irradiated by counter-propagating EM beams, as the mirror image method states. The beams have a π phase difference between them. The photonic force acting on the upper sphere is identical to the one calculated in Fig. 3(b). This clearly proves that the strong photonic force on the silicon sphere near the PEC originates from the strong photonic interaction between the silicon sphere and its mirror image. Red and blue dashed curves in Figs. 3(b) and 3(d) show the maximum fields Hy (responsible for the repulsive force) and Ez (responsible for the attractive force), indicating that the strong repulsive force is dominated by the magnetic resonance. It is very difficult to separate the electric and magnetic dipole contributions because strong magnetic resonances induce strong displacement E fields. However, in order to understand the optical process more deeply, we have separated the electric and magnetic field components of the Maxwell tensor (blue and red dotted curves shown in Fig. 3(b)), and we have obtained similar results.

Figure 4 shows results for a real gold mirror with the microcavity suspended in either vacuum (Figs. 4(a) and 4(b)) or water (Figs. 4(c) and 4(d)). The black solid curve (FDTD results) and the grey dash curve (MESME results) in Fig. 4(a) show the optical force on the silicon sphere as a function of incident wavelength. Clearly, a strong repulsive optical force appears at wavelengths between 1408 nm and 1624 nm. At 1490 nm, the repulsive force is about five orders of magnitude larger than the weight of the silicon sphere itself. Because most optical tweezers work in the liquid phase, Figs. 4(c) and 4(d) show a more realistic scenario, in which the metal is gold and the spheres are suspended in water. The results are similar to those in vacuum or air. The repulsive force maximum appears slightly red shifted because of the high refractive index of water. However, the photonic force in water is about half the value of that appearing in vacuum or air. From the maxima of the Ez (blue dash curve in Figs. 4(a), 4(c)) and Hy (red dash curve in Figs. 4(a), 4(c)) fields, it is again clear that this strong repulsive force is dominated by the magnetic resonance. For a low-refractive-index cavity (e.g., a PS sphere) of the same size as the silicon microspheres, Mie modes are much less well defined (see Fig. 2), and as expected, only very tiny photonic forces are observed (not shown). Finally, we estimate the degree of detectability of the forces under consideration by calculating them as a function of the gap between the sphere and the mirror. We plot in Fig. 5 the photonic repulsion as a function of gap distance. We use two different wavelengths, λ = 1490 nm and λ = 1580 nm, for the vacuum and water media, respectively. The vdW force acting on the sphere near the surface can be approximated as [33] F = Ar/(6s^2), where A is the Hamaker constant, r is the sphere radius, and s is the distance between the sphere and the surface. The overall Hamaker constant is approximated as in [33,34], where A11, A22 and A33 are the individual Hamaker constants for gold, silicon and the vacuum or water medium, respectively [33][34][35]. From the results of Fig.
5, we see that the vdW force is significantly weaker than the optical force (by up to 2 orders of magnitude). Down to very small separations of ~10 nm, the vdW force is less than half the optical repulsive force for a laser intensity of 10 mW/µm^2. Even at an easily attainable lower intensity of ~1 mW/µm^2, the photonic repulsion is still stronger than the vdW attraction in the water case. Additionally, the Brownian force is completely negligible at room temperature (10 fN). From Fig. 5 we deduce that the particle is subject to either repulsive or attractive forces depending on the gap distance. Therefore, particles of 460 nm diameter near a gold mirror and illuminated with an external laser of the appropriate wavelength should undergo oscillatory motion around an equilibrium position of zero total force. With the particle suspended in water, the friction force should dampen these oscillations and lead to stable levitation of the particle.

Conclusion

In conclusion, we have shown that the mirror image method can be extended to deal with the optical modes of a silicon microcavity excited with a laser beam and placed near a good conductor mirror, provided a proper treatment of the image is made by exciting the image cavity with the specularly reflected external light beam. The excited optical modes of the cavity strongly interact with its image beneath the metal surface, which results in attractive or repulsive forces, depending on the symmetry of the cavity modes. The sign of the force is repulsive (attractive) when the resonant mode is TM dominant (TE dominant). Remarkably, the optical force is detectable because it is much larger than the vdW force or the Brownian force. Our results have important implications for the optical manipulation of nanoparticles, which can then be used as probes to perform subwavelength microscopy. The combination of two lasers tuned to repulsive and attractive modes of the cavity, respectively, should allow full control over the position of the particle by changing the ratio between the intensities of the two lasers. Additionally, the particles can be decorated with biomarkers capable of attaching to specific biomolecules. This suggests a new way of performing molecule-specific biosensing, with the advantage of a high degree of spatial control over the position of the probe. In a separate direction, a trapped particle can be regarded as a switch which interacts with probing light of a colour differing from that of the trapping laser, thus resulting in the control of light by light. This type of switch or modulator can reach speeds above the MHz range for particles trapped in air.

Fig. 1 . Fig.
1.Schematic view of the mirror image method.(a) Schematic view of the mirror image method for a single electrical charge.The direction of the force between the charge and the metal is shown in left panel.(b) Schematic view of the mirror image method for a magnetic dipole.The direction of the force on magnet is shown in left panel.(c) Schematic view of the mirror image method applied to a microcavity (grey sphere) near a metallic conductor.The thick dark red arrow indicates the incident EM wave.The thin dark red arrow indicates the (H) field direction of the incident EM wave.The upper inset shows the directions of the incident(E) and (H) fields in the x-y plane (the planar surface is at z = 0).The lower inset shows the direction of (E) and (H) fields in the image, along with the corresponding electric and magnetic image dipoles.We also show the direction of the forces acting on the optical cavity originating in electric and magnetic dipoles. Fig. 2 . Fig. 2. Scattering efficiency of a single silicon sphere, immersed in vacuum (red line) and water (blue line), as a function of light wavelength.The scattering efficiency of single PS sphere in vacuum (black dash line) is also plotted.The E and H field intensity distribution of silicon sphere in vacuum for some of the Mie modes are also shown above.The radius of both silicon and PS spheres is 230 nm. Fig. 3 . Fig. 3. Optical force on a silicon nanosphere near a PEC surface.(a) Distribution of Ex, Ezand Hy field components within the x-z plane for a silicon nanosphere near a PEC surface at wavelengths of 1434 nm (TMD, upper panels) and 1744 nm (TED, lower panels).(b) Optical force along the z direction (black solid curve obtained from FDTD, grey dash line curve obtained from MESME, red dotted curve obtained from integration only the magnetic part of the Maxwell tensor, blue dotted curve obtained from integration of the electric part of Maxwell tensor[30,31]; left axis) and maximum of the Ez and Hy fields (blue and red dashed curves, respectively; right axis) acting on a silicon sphere separated by a 10 nm gap from a PEC surface as a function of wavelength.The sphere radius is 230 nm.The light intensity is 10mW/µm 2 .(c) Ex, Ez and Hy field distributions in the x-z plane for two neighbouring spheres at 1434 nm (TMD, upper panels) and 1744 nm (TED, lower panels).(d) Optical force along the z direction (black solid curve; left axis) and maximum Ez and Hy fields (blue and red dashed curves, respectively; right axis) for two spheres separately irradiated by counter-propagating incident light with π phase difference as a function of wavelength.The sphere size and light intensity is the same as in (a).The separation between the two spheres is 20 nm. Fig. 4 . Fig. 4. 
Optical force on a silicon nanosphere near a real metal. (a) Optical force along the z direction (black solid curve obtained from FDTD, grey dash line curve obtained from MESME; left axis) and maximum Ez and Hy fields (blue and red dashed curves, respectively; right axis) for a silicon sphere in vacuum separated by a 10 nm gap from a gold surface as a function of wavelength. The sphere radius is 230 nm. The light intensity is 10 mW/µm^2. (b) Ex, Ez and Hy field distributions in the x-z plane for a silicon sphere in vacuum at wavelengths of 1490 nm (TMD, upper panels) and 1750 nm (TED, lower panels). (c) Same as (a) for a silicon sphere suspended in water near a gold surface. (d) Ex, Ez and Hy field distributions in the x-z plane for a sphere in water near a gold surface at 1582 nm (TMD, upper panels) and 1894 nm (TED, lower panels).

Fig. 5. Optical force in different media as a function of sphere-metal separation and sphere dynamic motion. (a) Optical force (red curve) along the z direction for a silicon sphere in vacuum near a gold surface as a function of silicon-gold gap distance. The wavelength of incident light is 1490 nm. The radius of the sphere is 230 nm. The light intensity is 10 mW/µm^2. The van der Waals (vdW) force (black curve) is shown for comparison (see main text for details). (b) Same as (a) with the silicon sphere in water near a gold surface. The laser wavelength is 1582 nm in all cases.
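As a quick order-of-magnitude cross-check of the comparison summarized in Fig. 5, the following snippet (ours) recomputes the sphere weight quoted in the text and evaluates the van der Waals law F = Ar/(6s^2) at a few gap distances; the silicon mass density and the single effective Hamaker constant are illustrative assumptions of ours, since the combined Hamaker constant is not fully legible above, so the printed vdW values indicate only orders of magnitude.

```python
# Order-of-magnitude check of the numbers quoted in the text: the weight of the
# 230 nm-radius silicon sphere (quoted as ~1.2e-3 pN) and the van der Waals
# attraction F = A*r/(6*s^2).  The silicon density and the single effective
# Hamaker constant below are our own illustrative assumptions, not values taken
# from the paper.
import numpy as np

r = 230e-9                      # sphere radius (m)
rho_si = 2330.0                 # assumed density of silicon (kg/m^3)
g = 9.81

m = rho_si * 4 / 3 * np.pi * r ** 3
weight = m * g
print(f"sphere weight ~ {weight / 1e-12:.2e} pN")   # ~1.2e-3 pN, as quoted

A_eff = 1e-19                   # assumed effective Hamaker constant (J), order of magnitude only
for s in (10e-9, 30e-9, 100e-9):
    F_vdw = A_eff * r / (6 * s ** 2)
    print(f"gap s = {s * 1e9:5.0f} nm : F_vdW ~ {F_vdw / 1e-12:.1f} pN")
```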
6,279
2012-05-07T00:00:00.000
[ "Physics" ]
ω extension formulas for 1-jets on Hilbert spaces We provide necessary and sufficient conditions for a 1-jet ( f, G ) : E → R × X to admit an extension ( F, ∇ F ) for some F ∈ C 1 ,ω ( X ). Here E stands for an arbitrary subset of a Hilbert space X and ω is a modulus of continuity. As a corollary We provide necessary and sufficient conditions for a 1-jet (f, G) : E → R × X to admit an extension (F, ∇F ) for some F ∈ C 1,ω (X). Here E stands for an arbitrary subset of a Hilbert space X and ω is a modulus of continuity. As a corollary, in the particular case X = R n , we obtain an extension (nonlinear) operator whose norm does not depend on the dimension n. Furthermore, we construct extensions (F, ∇F ) in such a way that: (1) the (nonlinear) operator (f, G) → (F, ∇F ) is bounded with respect to a natural seminorm arising from the constants in the given condition for extension (and the bounds we obtain are almost sharp); (2) F is given by an explicit formula; (3) (F, ∇F ) depend continuously on the given data (f, G); (4) if f is bounded (resp. if G is bounded) then so is F (resp. F is Lipschitz). We also provide similar results on superreflexive Banach spaces. Introduction and main results Throughout this paper we will assume that ω : [0, +∞) → [0, +∞) is a concave and increasing function such that ω(0) = 0 and lim t→+∞ ω(t) = +∞. Also, we will denote ϕ(t) = t 0 ω(s)ds (1.1) for every t ≥ 0, and if X is a Banach space then C 1,ω (X) will stand for the set of all functions g : X → R which are Fréchet differentiable and such that Dg : X → X * is uniformly continuous, with modulus of continuity ω, that is to say, there exists some constant C > 0 such that for all x, y ∈ X. Here · * denotes the usual norm of the dual space X * , defined by ξ * = sup{ξ(x) : x ∈ X, x ≤ 1} for every ξ ∈ X * . If E is a subset of R n and we are given functions f : E → R, G : E → R n , Glaeser's C 1,ω version of the classical Whitney extension theorem (see [43,22]) tells us that there exists a function F ∈ C 1,ω (R n ) with ( for all x, y ∈ E. We can trivially extend (f, G) to the closure E of E so that the inequalities (1.2) hold on E with the same constant M , and the function F can be explicitly defined by (1.6) which will be shortened to A(f, G) whenever the subset E is understood. In particular, for a differentiable function F : X → R, we let A(F, ∇F ) stand for A(F, ∇F ; X). As we said, if we construct such an F by means of the Whitney Extension Operator (1.3), then we necessarily have lim n→∞ k(n) = ∞ for all possible choices of k(n). Nevertheless, in the case ω(t) = t (which gives raise to the important class of C 1,1 functions), J.C. Wells [42] and other authors [30,10,4] showed, by very different means, that the C 1,1 version of the Whitney extension theorem holds true if we replace R n with any Hilbert space and, moreover, there is a (nonlinear) extension operator (f, G) → (F, ∇F ) which is minimal, in the following sense. Given a Hilbert space X, with norm denoted by | · |, a subset E of X, and functions f : E → R, G : E → X, a necessary and sufficient condition for the 1-jet (f, G) to have a C 1,1 extension (F, ∇F ) to the whole space X is that (1.7) Moreover, the extension (F, ∇F ) can be taken with best Lipschitz constants, in the sense that is the C 1,1 trace seminorm of the jet (f, G) on E. In particular, considering X = R n we deduce the remarkable corollary that in the case ω(t) = t one can take k(n) = 1 for all n in Theorem 1.1. 
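The necessity direction of the extension results discussed here ultimately rests on a Taylor-type estimate for C 1,ω functions (it is invoked again, via Taylor's theorem, in the proofs of Section 3 below). For completeness, here is a short derivation of that estimate, written out by us, where M denotes any constant with ‖∇F(x) − ∇F(y)‖ ≤ M ω(‖x − y‖):

```latex
\begin{align*}
\bigl|F(x)-F(y)-\langle \nabla F(y),\,x-y\rangle\bigr|
  &= \Bigl|\int_0^1 \bigl\langle \nabla F\bigl(y+t(x-y)\bigr)-\nabla F(y),\,x-y\bigr\rangle \,dt\Bigr| \\
  &\le \int_0^1 M\,\omega\bigl(t\,\|x-y\|\bigr)\,\|x-y\|\,dt
   = M\int_0^{\|x-y\|}\omega(s)\,ds
   = M\,\varphi\bigl(\|x-y\|\bigr),
\end{align*}
```

where the middle equality follows from the substitution s = t‖x − y‖.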
Let us point out that condition (1.7) appears in Le Gruyer's paper [30]. Wells' Theorem was stated and proved in [42] with the following equivalent condition: there exists a number M > 0 such that for all y, z ∈ E. That this condition is equivalent to (1.7) can be easily checked as follows: for each M > 0 consider the quadratic function and find the point x M ∈ X that minimizes V M . Then we have A(f, G) ≤ M < ∞ if and only if V M (x M ) ≥ 0, which after a straightforward computation is easily seen to be equal to condition (W 1,1 ). We should also mention that Wells's proof [42] was rather elaborate (and constructive only in the case of a finite set E), and that Le Gruyer's proof [30], though very elegant and simple, was not constructive either (Zorn's lemma was used in an essential part of the argument). Very recently, the papers [4,10] supplied constructive proofs of Wells' theorem by means of two different explicit formulas, and also provided new proofs (with explicit formulas) for a related C 1,1 convex extension problem for 1-jets that had been previously considered in [2]; see also [3] for the C 1 convex case. In this paper we will consider the following questions: is Theorem 1.1 true if we replace R n with a Hilbert space X? Or equivalently, is there a version of Wells's theorem for not necessarily linear moduli of continuity ω? In particular, is Theorem 1.1 true with bounded k(n)? And what can be said about other Banach spaces X? Let us mention that, as was shown in [26], a similar question for the class C 1 (X) has a positive answer, but to the best of our knowledge nothing is known for nonlinear ω and the class C 1,ω (X), where X is a Hilbert space (or more generally a Banach space). It is also important to notice that for the classes C k,ω (X) with k ≥ 2 this kind of results is not true: the best possible constants k(n) in the higher order versions of Theorem 1.1 established in [22] must go to ∞ as n → ∞; see [42, Theorem 1 of Section 5]. As we will see, the main result of our paper gives a positive answer to the first question: a jet (f, G) defined on an arbitrary subset E of a Hilbert space X has an extension (F, ∇F ) with F ∈ C 1,ω (X) if and only if A(f, G) < ∞. Moreover, we can take F such that In particular, considering X = R n , this shows that in Theorem 1.1 one can always take k(n) ≤ 2 for all n ∈ N. We will also prove similar results for superreflexive Banach spaces X for a certain class of moduli of continuity. In order to state and explain our results more precisely, let us introduce some more notation and definitions. Recall that, given a function g : R → R, the Fenchel conjugate of g is defined by where g * may take the value +∞ at some t. If (X, · ) is a Banach space, with dual (X * , · * ), for any ξ ∈ X * we let ξ, v := ξ(v) denote the duality product. Definition 1.2. We will say that a 1-jet (f, G) defined on a subset E of a Banach space X satisfies condition (W 1,ω ) with constant M > 0 on E provided that On the other hand, for any function F ∈ C 1,ω (X), the jet (F, ∇F ) satisfies (W 1,ω ) with constant M = M ω (∇F ); see Proposition 3.1(2) below for a proof. Consequently, if M * denotes the infimum of those numbers M > 0 for which (f, G) satisfies (W 1,ω ) with constant M , Theorem 1.3 yields the following estimate We may obtain slightly better constants in the estimate of the gradient if we consider the following extension condition. Definition 1.4. 
We will say that a 1-jet (f, G) defined on a subset E of a Banach space X satisfies condition (mg 1,ω ) with constant M on E provided that for all y, z ∈ E and all x ∈ X. Thus (f, G) satisfies (mg 1,ω ) for some M > 0 if and only if A(f, G) < ∞, and A(f, G) is precisely the smallest M for which (f, G) satisfies (mg 1,ω ) with constant M . Condition (mg 1,ω ) is half-intrinsic and half-extrinsic (in what refers to points x ∈ X), as opposed to (W 1,ω ), which is completely intrinsic (it only concerns points y, z ∈ E). In principle condition (W 1,ω ) should be easier to check, but conditions like (mg 1,ω ) may also appear very naturally in some applications (see, for instance, the paper [1] in the convex setting). Anyhow both conditions are useful and in fact they are equivalent up to an absolute factor; see Proposition 3.1 below. In the case of a nonlinear modulus of continuity ω, these conditions, though equivalent, are no longer identical. This is due to the fact that the minimization of the function leads us in this case to rather perplexing equations which are difficult to handle and solve. Therefore a condition of the type V M (x M ) ≥ 0 would be much more complicated than (W 1,ω ). With this extrinsic condition we have the following. Theorem 1.5. Let E be a nonempty subset of a Hilbert space X, and f : The proof of the preceding theorem also gives us the following nearly optimal result. Note that in the particular case that α = 1 this result yields Wells' theorem. According to Theorem 1.5 we always have (1.11) and, in the special case that ω(t) = t α , we will see that this estimate can be improved as follows: On the other hand, for any extension (H, ∇H) of (f, G) with H ∈ C 1,ω (X) we always have the trivial estimate Hence we may conclude the following. and A(f, G) is defined by (1.6). It should be noted that for every function of class F ∈ C 1,1 (X) defined on a Hilbert space, we always have the identity Lip(∇F ) = A(F, ∇F ), but this is no longer true for the class C 1,ω . For instance, it is easy to see that the function f (x) This supremum is attained at couples of points (x, y) with x < 0 < y, and, using the homogeneity of f and f , it is not difficult to check that it is equal to sup t>0 t 3/2 + 3t 1/2 + 2 2(t + 1) 3/2 ≤ 1, 3066. We will also prove that Theorem 1.5 extends to the class of superreflexive spaces: if X is such a Banach space, thanks to Pisier's results (see [34,Theorem 3.1]), we can find an equivalent norm · in X such that may assume that the norm · is uniformly smooth with modulus of smoothness of power type p = 1 + α for some 0 < α ≤ 1. Hence there exists a constant C > 0, depending only on this norm, such that for all x, y ∈ X, λ ∈ [0, 1]. In particular, we have (1.14) We will consider modulus of continuity ω such that the function t → t α /ω(t) is nondecreasing, which includes the cases ω(t) = t β , with β ≤ α. We will then show that an inequality similar to (1.13) holds true with ψ ω = ϕ ω • · instead of · 1+α , where ϕ ω (t) = t 0 ω(s)ds. As a consequence, we will obtain the following theorem in terms of conditions (mg 1,ω ). Theorem 1.9. Let X be a superreflexive Banach space with an equivalent norm · satisfying (1.13) and let ω be a modulus of continuity such that t → t α /ω(t) is nondecreasing. Let E ⊂ X be a nonempty subset and f : E → R, G : E → X * two functions. There exists F ∈ C 1,ω (X) such that (F, DF ) = (f, G) on E if and only (f, G) satisfies (mg 1,ω ) for some M > 0. Moreover, we can arrange that M ω (DF ) ≤ 3 1 + 3 1+α 1+α C M . 
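For the Hölder moduli ω(t) = t^α considered above, both ϕ and its Fenchel conjugate admit closed forms; the following worked computation (ours, for illustration) may help fix ideas:

```latex
\omega(t)=t^{\alpha},\ 0<\alpha\le 1:\qquad
\varphi(t)=\int_0^t s^{\alpha}\,ds=\frac{t^{1+\alpha}}{1+\alpha},
\qquad
\varphi^{*}(s)=\sup_{t\ge 0}\Bigl(st-\tfrac{t^{1+\alpha}}{1+\alpha}\Bigr)
             =\frac{\alpha}{1+\alpha}\,s^{\frac{1+\alpha}{\alpha}},
```

with the supremum attained at t = s^{1/α}; in particular, for α = 1 one recovers ϕ(t) = t²/2 and ϕ*(s) = s²/2, the quadratic terms of the C 1,1 theory.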
And if we consider the intrinsic conditions (W 1,ω ) we have the following. It is worth noting that the proofs of Theorems 1.9 and 1.10 show that the sufficiency parts of these results still hold true for moduli ω not necessarily satisfying that the function t → t α /ω(t) is non-decreasing, if we only assume that the function ψ ω := ϕ ω • · is of class C 1,ω . However, such an assumption implies superreflexivity of the space X (see [11,Theorem V.3.2]), hence also the existence of an equivalent norm with modulus of smoothness of power type p = 1 + α for some α ∈ (0, 1]. Let us also mention that in [4,Section 6] it was shown that a necessary condition on a Banach space X for the validity of a Whitney-type extension theorem in X for some class C 1,ω is that X is superreflexive. Let us finish this introduction by making a few comments on our method of proof and honoring the title of this paper (where we promised some formulas). If one tries to adapt the proof of Wells' theorem given in [4] to the C 1,ω situation, one sees that the argument breaks down for the following reason: when ω(t) is not linear, it is no longer true that a function u is of class C 1,ω if and only if there exists a convex function ψ of class C 1,ω such that u +ψ is convex and u −ψ is concave. As it turns out, the appropriate class of functions for tackling this more general problem seems to be not that of convex functions, but that of strongly ϕ-paraconvex functions, see Definition 2.5 below. The main ideas of the proof of Theorem 1.6 are the following: if A(f, G) < ∞ then the functions are well defined and satisfy m(x) ≤ g(x) for all x ∈ X, and m(y) = g(y) = f (y) for all y ∈ E. Then one can check that the functions m and (−g) are strongly 2Mϕ-paraconvex and define F : (1.15) One may call F the 2Mϕ-strongly paraconvex envelope of g. As we will show, both F and −F are strongly 2Mϕ-paraconvex, and this implies that It is also worth noting that, in the very particular case ω(t) = t, one can also define F above with 2 replaced with 1. In this special case, another expression for F is the following: for each t ∈ R, p ∈ X, ξ ∈ X * , set Then we have (1.17) see Lemma 2.9 below. From this formula we can see that, in the case that E is finite, say that E has m points, then for each x ∈ X, F (x) can be computed by solving a maximization problem in R × X × X * with m constraints, where the function to be maximized and the constraining functions are linear combinations of bilinear functions and quadratic functions. Hence the computation of F (x) is much easier than in the general case of a nonlinear modulus ω. When ω(t) is not necessarily linear, we may also provide an alternate formula for an admissible extension F of (f, G) as the supremum of a smaller family of functions than that of (1.15): given a 1-jet (f, In the case that ω(t) is linear, it is easily seen that this extension F coincides with (1.17), and also with conv(g + ψ) − ψ; where ψ = M 2 | · | 2 and conv(g + ψ) denotes the convex envelope of g + ψ, that is, the supremum of all lower semicontinuous convex functions lying below g + ψ. This is a consequence of the fact that a function h : X → R is strongly ϕ-paraconvex if and only if h + ψ is convex; where ϕ(t) = M 2 t 2 (however, this is no longer true for nonlinear moduli of continuity). These results will all be shown in Section 3 below. 
In Sections 4, 5 we will give some variants of our techniques which will allow us to establish similar results for the subclasses of C 1,ω (X) consisting of bounded and/or Lipschitz functions, and also a certain continuous dependence of the extensions on the initial data, meaning that if a sequence {(f n , G n )} n∈N of jets converges uniformly on E to a jet (f, G) then the corresponding extensions satisfy that lim n→∞ (F n , ∇F n ) = (F, ∇F ) uniformly on X. Finally, in Section 6 we will consider the class C 1,u B (X) of differentiable functions whose derivatives are uniformly continuous on bounded subsets of X, and we will show the following result: suppose that the jet (f, G) is bounded on each bounded subset of E; then there exists Also note that, in the particular case X = R n , we have C 1 (R n ) = C 1,u B (R n ), and this statement is thus equivalent to Whitney's extension theorem for C 1 . Some technical tools Recall that the Fenchel conjugate of a function g is denoted by g * and defined as in (1.10). Proposition 2.1. The following properties hold. Here, for a function ψ : Abusing terminology, we will consider the Fenchel conjugate of nonnegative functions only defined on [0, +∞), say δ : [0, +∞) → [0, +∞). In order to avoid problems, we will assume that all the functions involved are extended to all of R by setting δ(t) = δ(−t) for t < 0. Hence δ will be an even function on R and therefore In the following proposition we collect some elementary facts concerning the functions ω, ω −1 , ϕ and ϕ * . (1) ϕ is convex; If, in addition, ω is increasing and lim t→∞ ω(t) = ∞, then ω −1 and ϕ * are well defined and be a Hilbert space, and ω a modulus of continuity as in the preceding proposition. Then the function ψ(x) = ϕ(|x|), x ∈ X, satisfies the following inequality: Using the duality theorem (see [44,Proposition 3.5.3], for instance), we obtain that ψ = (ψ * ) * is uniformly smooth with modulus of smoothness δ * , that is, For Hölder moduli of continuity, the preceding lemma is true with constant 2 1−α instead of 2. Lemma 2.4. Let (X, | ·|) be a Hilbert space, and ω(t) = t α for α ∈ (0, 1]. Then the function ψ(x) = ϕ(|x|), x ∈ X, satisfies the following inequality: We know from [41] that ψ * is uniformly convex with modulus of convexity Thanks to Proposition 2.2 we have By using the duality theorem as in Lemma 2.3, we obtain the desired inequality. Definition 2.5. If C ≥ 0 is a constant, we will say that a function u is strongly Cϕparaconvex on a Banach space X if we have for all x, y ∈ X and all t ∈ [0, 1]. Thus the preceding two lemmas can be restated by saying that −ψ is strongly Cϕparaconvex for some C > 0. On the other hand, since ψ is also convex, ψ is trivially strongly ϕ-paraconvex. Some authors call such functions u semiconvex, or ϕ-semiconvex, but we prefer not to use this terminology because it may make the reader think that the function u +Cϕ (| · |) will be convex, at least locally for some large C, which is generally false unless ω is linear. See [8,27,35,36] and the references therein for background on paraconvex and strongly ϕ-paraconvex functions. Next we recall a well-known fact about this kind of functions which we will have to use in our proofs. This result is usually shown in more specialized settings with the help of subdifferentials or Clarke's generalized gradients. For the reader's convenience (and also because we need precise estimates and the literature's terminology varies depending on authors), we include a self-contained elementary proof of this result. 
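Since the displayed inequality in Definition 2.5 is not fully legible here, the following one-dimensional sketch (ours) assumes the form u(λx + (1 − λ)y) ≤ λu(x) + (1 − λ)u(y) + Cλ(1 − λ)ϕ(‖x − y‖), which is consistent with the estimate h − λ(1 − λ)Cϕ(‖x − y‖) ≤ F used in the proofs of Section 3 below, and checks it numerically for u = −ϕ(| · −z|) with the constants asserted by Lemmas 2.3 and 2.4.

```python
# A numerical sanity check, in one dimension, of the strong paraconvexity asserted
# by Lemmas 2.3/2.4.  The exact inequality of Definition 2.5 is not fully legible
# above, so we assume the form
#     u(l*x + (1-l)*y) <= l*u(x) + (1-l)*u(y) + C*l*(1-l)*phi(|x-y|).
import numpy as np

alpha = 0.5
phi = lambda t: t ** (1 + alpha) / (1 + alpha)       # phi(t) = int_0^t s^alpha ds
z = 0.3
u = lambda x: -phi(np.abs(x - z))                    # the function of Lemma 2.3 (M = 1)

rng = np.random.default_rng(0)
x, y = rng.uniform(-5, 5, 100_000), rng.uniform(-5, 5, 100_000)
lam = rng.uniform(0, 1, 100_000)

for C in (2.0, 2 ** (1 - alpha)):                    # Lemma 2.3 and the sharper Lemma 2.4
    gap = u(lam * x + (1 - lam) * y) - lam * u(x) - (1 - lam) * u(y) \
          - C * lam * (1 - lam) * phi(np.abs(x - y))
    print(f"C = {C:.3f}: max violation = {gap.max():.2e}  (non-positive up to rounding)")
```

Under this assumed form of the inequality, a hand computation for the symmetric configuration x = −a, y = a, z = 0, λ = 1/2 shows that the constant 2^{1−α} of Lemma 2.4 is exactly attained, so the "max violation" printed for that constant should be essentially zero.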
Proposition 2.6. Let (X, · ) be a Banach space, ω a modulus of continuity, ϕ(t) = t 0 ω(s)ds, and u : X → R be a continuous function. Assume that both u and −u are strongly Cϕ-paraconvex. Then u is everywhere Fréchet differentiable, and, with the notation of (1.6), A(u, Du) ≤ C. In particular u is of class C 1,ω (X) with for all x, y ∈ X. Moreover, if X is a Hilbert space, we have Proof. Taking y = a and h = x − a in (2.1) we see that u satisfies and since −u is strongly Cϕ-paraconvex too, we obtain For the moment, let us fix a and h in X, and consider s, t ∈ (0, 1]. The inequality (2.2) implies Similarly, because −u is also strongly Cϕ-paraconvex, we have for all s, t ∈ (0, 1], a, h ∈ X. This entails the existence and local uniform boundedness of the limit for a, v ∈ X. Indeed, on the one hand, by taking s = 1 and using that u is locally bounded we see that there is some r > 0 and a constant k r such that On the other hand, if the limit in (2.5) did not exist then there would be some ε > 0 and two sequences (s n ), (r n ) of strictly positive numbers converging to 0 such that for all n. Up to extracting subsequences we may assume that 0 < r n < s n for all n, and then find (t n ) ⊂ (0, 1] such that r n = t n s n for every n, so that the above inequality reads in contradiction with (2.4) and the fact that Next, by using (2.3) and (2.7) we also get exists and equals D v u(a). Furthermore, by letting t go to 0 in (2.4) we also have for every a, v ∈ X, s ∈ (0, 1], and in particular for all a, v ∈ X. In order to finish the proof that u is differentiable, we will now combine some calculations from [8, Theorem 3.3.7] and [27, Theorem 6.1]. We do not yet know that the function v → D v u(a) is linear, but we do easily get that D λv u(a) = λD v u(a) for all a ∈ X and λ ∈ R; this fact is a straightforward consequence of (2.8) which we will use before establishing the linearity of v → D v u(a). We next show that for all a, b ∈ X. Indeed, writing b = a + h with h = 0, and using the strong ϕparaconvexity of u and −u, and the fact that Observe also that sup v ≤1 |D v u(a)| is finite for every a, thanks to (2.6). Now we may from which we easily deduce that D v+w u(a) = D v u(a) + D w u(a). We thus have that u is everywhere Fréchet differentiable, and from (2.9) we obtain that the jet (u, Du) : X → R × X * satisfies A(u, Du) ≤ C. The estimations for the modulus of continuity of Du are a consequence of Proposition 3.1(3) below. Let us finish this section by studying what one could fairly call the Cϕ-paraconvex envelope of a function. Definition 2.7. Given a Hilbert space X, a continuous function g : X → R, and a number C > 0, let us define Proof. Indeed, since h is strongly Cϕ-paraconvex, it is locally Lipschitz (see [27, Proposition 6.1] for a proof of this fact), and then the Clarke subdifferential ∂ C h(x) is nonempty for every x ∈ X. Moreover, according to [27, p. 219], the Clarke subdifferential of h can be written as ,δ) is Lipschitz. Using that h is strongly Cϕ-paraconvex we can prove that, in fact, we have the formula Letting t → 0 + and taking into account that ϕ(t) ≤ tω(t) and lim t→0 + ω(t) = 0, we get We have thus shown (2.10). Since h is locally Lipschitz we have ∂ C h(x) = ∅ for every x ∈ X and the result follows. Lemma 2.9. Assume that ω(t) = at, where a > 0. Then we have, for every x ∈ X, Proof. 
Let us call On the one hand, by using Lemma 2.4 with α = 1, we have that H p,t,ξ is strongly Cϕ-paraconvex, hence it is clear that On the other hand, if h is strongly Cϕ-paraconvex and h ≤ g then, according to the previous lemma, there exists some ξ ∈ X * such that for all y ∈ X. Because y → H x,h(x),ξ (y) is strongly Cϕ-paraconvex and lies below g, we have, by definition of H, Therefore H ≥ h for every h that is strongly Cϕ-paraconvex and lies below g. Since W C is the supremum of all such h, we also have for all x ∈ X. Thus we conclude H = W C . Proofs of the main results Let us start by showing the equivalence between conditions (mg 1,ω ) and (W 1,ω ) and their relation with the quantity M ω (G) for jets (f, G) defined on subsets of Banach and Hilbert spaces. (1) Assume that (f, G) satisfies condition (W 1,ω ) with constant M > 0. Then we have and, in particular, M ω (G) ≤ 3M . Moreover, if X is a Hilbert space, then Furthermore, if X is a Hilbert space and ω(t) = t α , α ∈ (0, 1], we have M ω (G) ≤ Proof. (3) Let y, z ∈ E and v ∈ X. For the point x = 1 2 (y + z) + v, condition (mg 1,ω ) gives Reversing the roles of y and z, and taking x = 1 2 (y + z) − v in condition (mg 1,ω ) we obtain By summing both inequalities we have (3.4) These estimates hold for any v ∈ X, and in particular for every v ∈ X with v = y − z . Then, using that ϕ is convex, we conclude Let us now assume that X is a Hilbert space. Note that the function [0, , which is nonincreasing because so is s → ω(s)/s. Using the concavity of ϕ( √ ·) in (3.4) we obtain Writing |v| = t|y − z| with t > 0 and using Proposition 2.2 we deduce Taking t = √ 15/2 and t = √ 3/2 in (3.5), the desired estimate follows immediately. Finally, assume that X is a Hilbert space and ω(t) = t α for α ∈ (0, 1]. From (3.5) we derive where |v| = t|y −z| for t > 0. It is straightforward to see that the function 0 < t → h(t) = 2(t 2 +1/4) The desired estimate easily follows. (4) Given y, z ∈ E, we have By summing both inequalities we easily get Applying Jensen's inequality on both sides of the previous inequality (bearing in mind that ω −1 is convex and ω is concave) we obtain Now we show Theorems 1.3, 1.5, 1.6, 1.7, 1.9, 1.10, and Corollary 1.8. Part of the proof of these results, as those of Sections 4 and 5 below, will be deduced from the following technical theorem. (1) ψ − y and −ψ + y are strongly Cϕ-paraconvex for each y ∈ E; for all x ∈ X and all y, z ∈ E. Let us define functions m, g, F : X → R by and F (x) := sup{h(x) : h ≤ g, h is strongly Cϕ-paraconvex}. Proof. Condition (3) is obviously equivalent to saying that m and g are finite everywhere and satisfy m ≥ g on X. Also, if y ∈ E, it is obvious that m(y) ≥ f (y) and g(y) ≤ f (y), which implies m(y) = g(y) = f (y). Thus we have for all x ∈ X, and m(y) = g(y) = f (y) for all y ∈ E. Proof. That m and −g satisfy the lemma follows from the elementary observation that the supremum of a family of strongly Cϕ-paraconvex functions is strongly Cϕparaconvex. Once we know that m is strongly Cϕ-paraconvex, since m ≤ g on X, we deduce that F is well defined, with m ≤ F ≤ g on X. According to (3.10), this implies F = f on E. Finally, applying the mentioned observation again, we obtain that F is strongly Cϕ-paraconvex as well. Proof. Fix x, y ∈ X and λ ∈ [0, 1] and define the function Using that F is strongly Cϕ-paraconvex it is straightforward to check that h is strongly Cϕ-paraconvex as well. 
Also, since F ≤ g, we have that for all z ∈ X; where the last inequality follows from the fact that −g is strongly Cϕ-paraconvex; see Lemma 3.3. We have thus shown that h −λ(1 −λ)Cϕ( x −y ) is strongly Cϕ-paraconvex and less than or equal to g. By the definition of F , we must have h −λ(1 −λ)Cϕ( x −y ) ≤ F . In particular, This proves the lemma. Proof. We already know that both F and −F are strongly Cϕ-paraconvex. Then by Proposition 2.6 we have that F is of class C 1,ω (X), with for all x, y ∈ X, and also that Finally, let us check that DF = G on E. By the definitions of m and g and the fact that m ≤ F ≤ g on X we have, for every y ∈ E and x ∈ X, that is, ψ − y ≤ F ≤ ψ + y on X, and since by condition (2) we also know that ψ ± y is differentiable at y, with Dψ ± y (y) = G(y) and ψ ± y (y) = f (y) = F (y), we conclude that DF (y) = G(y). The proof of Theorem 3.2 is complete. Proofs of Theorems 1.3, 1.5 and 1.6. Let us first note that in the case that A(f, G) = 0 our results are trivial. Indeed, if A(f, G) = 0, we may fix a point z 0 ∈ E and we have f (y) + G(y), x −y = f (z 0 ) + G(z 0 ), x −z 0 for all y ∈ E, x ∈ X; then the affine function On the other hand, it is clear that A(f, G) is the infimum of all constants M > 0 for which (mg 1,ω ) holds. In particular, A(f, G) = 0 if and only if (f, G) satisfies condition (mg 1,ω ) with all M > 0. According to these observations, in our proofs we may assume that A(f, G) > 0. Also note that if A(f, G) is finite and strictly positive then (f, G) satisfies condition (mg 1,ω ) with M = A(f, G). Let us start with the proof of Theorem 1.6. To prove the necessity of (mg 1,ω ), which is obviously equivalent to A(f, G) < ∞, we just use Taylor's theorem: for all x, y, z ∈ X, from which (mg 1,ω ) follows immediately (in fact this shows that Let us now show the sufficiency of condition (mg 1,ω ). Assume that (f, G) satisfies (mg 1,ω ) with constant M := A(f, G) > 0. For all y, z ∈ E, define the functions Condition (mg 1,ω ) tells us that ψ − z (x) ≤ ψ + y (x) for all x ∈ X, y, z ∈ E, so the functions ψ ± y meet condition (3) of Theorem 3.2, and it is obvious from the definition that they also satisfy condition (2). By Lemma 2.3 we have that x → −Mϕ(|x − z|) is strongly 2Mϕ-paraconvex, which immediately implies that the function is of class C 1,ω (X), where Theorem 1.5 is an immediate consequence of Theorem 1.6 and Proposition 3.1(3). Finally, in order to prove Theorem 1.3 we slightly modify the proof of Theorem 1.6. Assume that the jet (f, G) satisfies condition (W 1,ω ) with constant M > 0 on a subset E of a Hilbert space X. Defining ϕ = ϕ(2·) and ψ ± y (x) := f (y) + G(y), x −y ±M ϕ(|x −y|) for every x ∈ X, y ∈ E, we see from the arguments in the proof of Theorem 1.6 together with Proposition 3.1(1) that the families of functions {ψ ± y } y∈E satisfy all the assumptions of Theorem 3.2 for C = 2M and with ϕ in place of ϕ. Thus if F is the function defined in (3.8) (with ϕ in place of ϕ), then both F and −F are strongly 2M ϕ-paraconvex with F = f and ∇F = G on E. Proposition 2.6 tells us that F ∈ C 1, ω (X) with ω = 2ω(2·) and Proofs of Theorem 1.7 and Corollary 1.8. In the preceding proof we may use Lemma 2.4 instead of Lemma 2.3 to obtain that m and −g are strongly 2 1−α Mϕ-paraconvex, and the rest of the proof goes through just replacing 2M with 2 1−α M at the appropriate points, yielding Theorem 1.7. 
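The displayed definitions of m, g and F in Theorem 3.2 are also missing from the extracted text. A hedged reconstruction of the extremal functions used in the preceding proofs, consistent with the formula ψ±y(x) := f(y) + ⟨G(y), x − y⟩ ± M ϕ̃(|x − y|) stated above for condition (W 1,ω), is the following; the formulas for m and g are reconstructions, while the definition of F as the supremum of the strongly Cϕ-paraconvex minorants of g is taken from the statement of Theorem 3.2 itself.

```latex
% Hedged reconstruction of the extremal functions of Theorem 3.2 (the exact
% constant C and the choice of \varphi vs. \tilde\varphi depend on the case).
\[
  m(x) := \sup_{y\in E}\psi^{-}_{y}(x), \qquad
  g(x) := \inf_{y\in E}\psi^{+}_{y}(x), \qquad x\in X,
\]
\[
  F(x) := \sup\bigl\{\,h(x) : h\le g \text{ on } X,\
  h \text{ strongly } C\varphi\text{-paraconvex}\,\bigr\},
\]
\[
  \text{with } \psi^{\pm}_{y}(x) = f(y) + \langle G(y),\,x-y\rangle
  \pm M\,\varphi\bigl(\lVert x-y\rVert\bigr)
  \quad\text{in the proof of Theorem 1.6.}
\]
```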
On the other hand Corollary 1.8 is an obvious consequence of previous results and some remarks made in the introduction, together with the following observation. If α ∈ (0, 1], and ω(t) = t α , then one can combine Lemma 2.4 and Proposition 2.6 to improve the estimates of A(F, ∇F ) in Theorem 1.6 and of the trace seminorm (f, G) E,ω in (1.11) as follows: and Proofs of Theorem 1.9 and 1.10. We start with the proof of Theorem 1.9. Assume that X is a superreflexive space with an equivalent norm · satisfying (1.13) for some α ∈ (0, 1] and C > 0. Let ω be a modulus of continuity such that t → t α /ω(t) is non-decreasing. Finally, in order to prove the desired inequality, let λ ∈ [0, 1] and x, y ∈ X. We can easily write Let us define m and g by where now ϕ(t) := t 0 ω, and ψ(x) := ϕ( x ). Bearing in mind Lemma 3.6 we see that the function where C * is as in Lemma 3.6. That is to say, −p is strongly C * Mϕ-paraconvex. Then, we define F (x) := sup{h(x) : h ≤ g and h is strongly C * Mϕ-paraconvex}, x ∈ X, and exactly as in the proof of Theorem 1.5 one may use Theorem 3.2 to show that F and −F strongly C * Mϕ-paraconvex, which by Proposition 2.6 implies that F is of class C 1,ω (X), with the following estimate: for every x, y ∈ X. As in that proof, we also have m ≤ F ≤ g on X, m = f = g = F on E, and DF = G on E. Finally, Theorem 1.10 is a consequence of Theorem 1.9 and Proposition 3.1. Indeed, assuming that (f, G) satisfies condition (W 1,ω ) on E, the functions F and −F are strongly C * M ϕ-paraconvex, with ϕ = ϕ(2·). Bearing in mind that ϕ(t) = t 0 ω; where ω = 2ω(2·), the estimate in Proposition 2.6 yields DF ( Let us conclude this section with a proof that the alternate formula (1.18) also provides an admissible extension F in Theorem 1.6. X, and a 1-jet (f, G) : Then F is of class C 1,ω (X) and satisfies (F, Proof. By replacing ω(t) with Mω(t) if necessary, we may assume without loss of generality that M = 1. Let us observe that: where a, ξ, λ i , p i are as in the definition of F. Then, for every x, y ∈ X and t ∈ [0, 1], we can write where we have used that −ϕ • | · | is strongly 2ϕ-paraconvex. • F is well defined and satisfies F ≤ g. Indeed, since m ≤ g and any function x → f (y) + G(y), x − y − ϕ(|x − y|) belongs to F, we have that F is well defined and m ≤ F ≤ g on X. We then deduce that F = f on E and This shows that F is differentiable on E, with ∇F = G on E. • F is strongly 2ϕ-paraconvex. This is a consequence of the general and obvious fact that the supremum of a family of strongly Cϕ-paraconvex functions is also strongly Cϕ-paraconvex. In order to show that F ∈ C 1,ω (X), let us also note the following. • The function −F is strongly 2ϕ-paraconvex. Indeed, let x, y ∈ X, t ∈ [0, 1] and ε > 0. We can find h 1 , h 2 ∈ F with h i ≤ g and F (x) ≤ h 1 (x) + ε, F (y) ≤ h 2 (y) + ε. Define It is straightforward to see that h ∈ F. We have that where we have used the fact that −g is strongly 2ϕ-paraconvex. This shows that Letting ε go to 0 we thus obtain that −F is strongly 2ϕ-paraconvex. Now we can apply Proposition 2.6 to conclude that F ∈ C 1,ω (X) and A(F, ∇F ) ≤ 2 = 2A(f, G). The bounded case If a jet (f, G) defined on a subset X of a Hilbert space X satisfies A(f, G) < ∞ then we already know that there exists F ∈ C 1,ω (X) such that (F, ∇F ) = (f, G) on E. If the given functions f, G are bounded on E, then it is natural to ask whether (F, ∇F ) can be taken to be bounded. 
The extensions F defined by (1.15) may not be bounded (in fact they are never bounded when E is bounded), but in this section we will see how we can modify the proof of Theorems 1.3 and 1.5 so as to get (F, ∇F ) bounded. Also, with a different modification of the proof, we can obtain a certain continuous dependence of the extensions on the initial data, meaning that if a sequence {(f n , G n )} n∈N of jets converges uniformly on E to a jet (f, G) then the corresponding extensions satisfy that lim n→∞ (F n , ∇F n ) = (F, ∇F ) uniformly on X. In order to formulate our results more precisely, let us introduce some more notation. Let us denote and endow this vector space with the norm which makes C 1,ω b (X) a Banach space. Also observe that the mapping (f, G) → A(f, G) is a seminorm on the vector space of 1-jets and therefore defines a norm on J 1,ω b (E). Theorem 4.1. Let X be a Hilbert space. There exist a nonlinear operator E : and a constant C > 0, only depending on ω, with the following properties: Proof. Given a jet (f, G) ∈ J 1,ω b (E), let us define E(f, G) as follows. For the number On the other hand, if |x − y| < 1 then In either case we have for all x ∈ X, y ∈ E, and similarly we see that for all x ∈ X, z ∈ E. Let us define, for each y ∈ E, the functions By using (4.1) and (4.2) and the assumption that A(f, G) ≤ M < ∞, it is immediately checked that these functions satisfy conditions (2) and (3) of Theorem 3.2. Besides, recalling Lemma 2.3 and the fact that the maximum of two strongly 2Mϕ-paraconvex functions is strongly 2Mϕ-paraconvex, we also have that m is strongly 2Mϕ-paraconvex. Similarly, −g is strongly 2Mϕ-paraconvex too. Then we can apply Theorem 3.2 with C = 2M , obtaining that F and −F are strongly 2Mϕ-paraconvex, hence F ∈ C 1,ω (X), and that (F, Since m ≤ F ≤ g, it is obvious that we also have Let us now estimate ∇F ∞ . By using (2.4) with s = 1 in the proof of Proposition 2.6, and recalling that both F and −F are strongly 2Mϕ-paraconvex, we have and by setting |h| = 1 and letting t → 0 + we obtain In conclusion, by combining (4.5), (4.6) and (4.7) we obtain that where C > 0 is a constant only depending on ω. with the following properties: That the mapping (f, G) → E(f, G) satisfies properties (1) and (2) of the statement can be checked exactly as in the proof of the previous theorem. In order to prove (3) we need to localize the infimum defining the associated functions g and Lemma 4.3. We have that for all x ∈ X, n ∈ N. Proof. If x ∈ X, y ∈ E and |x − y| ≥ 1 then Therefore Obviously the same holds true of g n . Lemma 4.4. (g n ) converges to g uniformly on X. Proof. Let ε > 0, and choose n 0 ∈ N large enough so that Then, given x ∈ X, we either have g( In the first case we have for all n ≥ n 0 . In the second case, thanks to (4.8) we may find y x ∈ E ∩ B(x, 1) such that hence, for all n ≥ n 0 , In either case we see that if n ≥ n 0 then for all x ∈ X. Similarly one can check that for all x ∈ X, n ≥ n 0 . Thus we conclude that g n − g ∞ ≤ ε for all n ≥ n 0 . Proof. Observe that the family of functions {g, g n , F, F n } n is uniformly bounded thanks to property (2) and the fact that {(f n , G n )} n converges uniformly to (f, G). Together with Lemma 4.4, this implies that, given ε > 0, we can choose n 0 ∈ N so that, for every n ≥ n 0 (4.10) In particular, (4.9) implies that MM −1 n g n ≤ g + MM −1 n − 1 g n ∞ + ε/6 ≤ g + ε/3 for each n ≥ n 0 . 
(4.11) Observing that a function h is strongly aϕ-paraconvex if and only if ba −1 h + c is strongly bϕ-paraconvex, where a, b > 0 and c ∈ R are any constants, the inequalities in (4.9) and (4.11) yield, for every x ∈ X and n ≥ n 0 , Similarly (using the inequalities of (4.10)), we obtain for all x ∈ X, n ≥ n 0 . Therefore F n − F ∞ ≤ ε for all n ≥ n 0 . It only remains to be shown that lim n→∞ ∇F n − ∇F ∞ = 0. This is a consequence of the following fact (which is of course well known; we include a short proof here for the reader's convenience). Lemma 4.6. Let u : X → R be differentiable, and let (u k ) be a sequence of differentiable functions such that u k converges to u uniformly on X, and such that for some constant for all k ∈ N and all x, y ∈ X. Then ∇u k − ∇u ∞ converges to 0 uniformly on X. Proof. By substracting the second inequality from the first one we get Given ε > 0 we may choose k 0 ∈ N so that u k − u ∞ ≤ ε 2 /4 for all k ≥ k 0 . By taking h ∈ X with |h| = ε and y = x + h in (4.12) we obtain It is clear that property (2) together with the fact that {(f n , G n )} n converges uniformly to (f, G) implies that max{M ω (∇F n ), M ω (∇F )} ≤ A * C, for n large enough and for a constant A * > 0 comparable to f ∞ + G ∞ + A. Combining Lemma 4.5 and this observation, we can apply Lemma 4.6 for (F n ) n and F to conclude lim n→∞ ∇F n = ∇F uniformly on X. The proof of Theorem 4.2 is complete. (1) Theorems 4.1 and 4.2 have analogues for superreflexive spaces X and the classes C 1,ω (X), assuming that t α /ω(t) is a nondecreasing function. We let the reader formulate them. The proofs are the same, with obvious changes. (2) It would be interesting to know whether one can improve Theorems 1.5 and 4.1 to find an extension operator with the additional property that The Lipschitz case In this section we will show a variant of our main result in which we are given (f, G) with G bounded but f unbounded, and we want an extension F with ∇F bounded. Let us consider the function ψ := ϕ • | · |, and check that − ψ is strongly C ϕ-paraconvex for some absolute constant C > 0. Indeed, if ω is defined as ω = ω on [0, 1] and ω = ω(1) on [1, +∞), observe that for all u, v ∈ X such that |u|, |v| ≥ 1, we have that If one of the vectors, say u, is inside the unit ball and the other is not, then the line segment [u, v] intersects the unit sphere at a unique point z, and we have In any case we see that Now, given x, y ∈ X and λ ∈ [0, 1], we can use (5.1) to obtain This proves that − ψ is strongly C ϕ-paraconvex, with C = 2A + 4. This C is not to be confused with that of the statement of the theorem. Now let M := A(f, G) < ∞ and let y, z ∈ E, x ∈ X. Observe that if |x −y|, |x −z| ≤ 1, then On the other hand if |x − y| > 1 or |x − z| > 1, using that f is Lipschitz, G is bounded, the fact that t ≤ ϕ(1) −1 ϕ(t) for every t ≥ 1, and finally the convexity of ϕ, we can write We conclude that The preceding observations show that the family of functions satisfy conditions (1), (2) and (3) of Theorem 3.2. In addition, since ( ϕ) = ω ≤ ω(1), the function m := sup y∈E ψ − y is Lipschitz with Lip(m) ≤ G ∞ + ω(1) M . Applying Theorem 3.2 (by means of formula (3.9)), we obtain a Lipschitz function F ∈ C 1, ω (X) such that (F, ∇F ) = (f, G) on E, A(F, ∇F ) ≤ C M and Lip(F ) ≤ G ∞ +ω(1) M . Notice that Theorem 3.2 can be applied for ω and ϕ since the assumption that the modulus of continuity ω must satisfy lim t→∞ ω(t) = ∞ is not needed in the proof of Proposition 2.6. 
Finally, observe that, since ω ≤ ω, we have that F ∈ C 1,ω (X) as well. The class C 1,u B (X) Let X be a Hilbert space, and let C 1,u B (X) stand for the space of all Fréchet differentiable functions on X whose derivatives are uniformly continuous on each bounded subset of X. In this section we combine Theorem 4.1 with a standard partition of unity in order to characterize the 1-jets (f, G) which are restrictions to E of some (F, ∇F ) with F ∈ C 1,u B (X). Proof. For every x ∈ X, y, z ∈ E with |x − y| + |x − z| > 0, let us denote θ(x, y, z) := |f (y) + G(y), x − y − f (z) − G(z), x − z | |x − y| + |x − z| . Proof. From the above construction of ω k and α k it is clear that for all x ∈ B 3k and all y, z ∈ E ∩ B 3k . This implies that for all x ∈ B 3k and all y, z ∈ E ∩ B 3k . On the other hand, if x / ∈ B 3k and y, z ∈ E k then we have that |x − y| ≥ 1, |x − z| ≥ 1, and using the convexity of ϕ k we get Therefore we have Now we can apply Theorem 4.1 to find a function F k ∈ C 1,ω k (X) such that (F k , ∇F k ) | E k = (f k , G k ), with (F k , ∇F k ) bounded. Let us finally define Since the sum defining F is finite on every bounded subset of X, the functions ψ k , ∇ψ k , F k and ∇F k are bounded, and ∇ψ k and ∇F k are uniformly continuous on X, it is clear that F ∈ C 1,u B (X). Also, using the facts that ∞ k=1 ∇ψ k = 0 and F k (y) = f k (y) = f (y) and ∇F k (y) = G k (y) = G(y) if y ∈ supp(ψ k ) ∩ E, we have that, for each y ∈ E, F (y) = f (y) and ∇F (y) = ∞ k=1 ψ k (y)∇F k (y) +
11,813.2
2020-01-01T00:00:00.000
[ "Mathematics" ]
Development of GaN Technology-Based DC/DC Converter for Hybrid UAV Wide band-gap (WBG) semiconductor technology represents a potential candidate to displace the conventional silicon (Si) technology used in power electronics. Of the two WBG power semiconductors, Silicon Carbide (SiC) and Gallium Nitride (GaN), the latter is the less mature technology, with many open research problems, especially in the aerospace industry. In this paper, we address the design and implementation of a DC/DC converter for a hybrid small unmanned aerial vehicle (UAV) based on GaN technology. Both theoretical and simulation comparisons of Si, SiC and GaN transistors for the converter are presented. The conclusion is that GaN devices are the most appropriate to fulfill the converter requirements within the size and weight limitations of the selected UAV. The paper presents a buck converter which handles an input voltage range of 32 V to 40 V and provides a 12 V regulated output and an output power of up to 60 W. The experimental results obtained with the prototype converter show how promising GaN technology is for aerospace systems, not only regarding volume and size, but also efficiency. In addition, practical implementation details are reported to contribute to the design of small, light and reliable GaN power converters for aeronautics. I. INTRODUCTION Traditionally, unmanned aerial vehicles (UAVs) have been used worldwide for military purposes. However, in the last decade, commercial and civilian usage of drones has rapidly increased. According to experts, the market for commercial and civilian drones will grow at a higher rate than the military ones in the coming years. In particular, the Federal Aviation Administration (FAA) estimates that more than 7 million small hobbyist and commercial UAVs are expected to be purchased by 2020, with 6.1 million sales for 2019 [1]. Drones have multiple civil uses such as search and rescue operations, surveillance, forest fire detection, package delivery, pollution and environmental monitoring, reconnaissance operations, or precision crop monitoring. Miniaturization and innovation of electronics [2] is one of the technological drivers that can transform the UAV industry: it will allow a reduction in the drone's overall weight and size while improving efficiency. Small drones are often powered by lithium-ion and lithium-polymer rechargeable batteries, but recently efforts have been made to extend the duration of missions by using a combination of hybrid power sources, such as fuel cells, batteries or solar cells [3]-[6]. This hybridization of different power sources with different power outputs requires an appropriate DC/DC converter. Like the rest of the electrical and electronic equipment on board the UAV, this power converter must be as small, light and energetically efficient as possible, which poses a true technological challenge. The aim of this paper is to propose a DC/DC converter suitable for a hybrid UAV. This convertible UAV is currently being developed by a working team formed by researchers from several Spanish and Brazilian universities in collaboration with partners of the emergency medical care service (SAMU). The power converter must meet the desirable requirements of small size and weight while maximizing its energy efficiency. 
For this purpose, we investigated several emergent technologies in power electronics: wide bandgap (WBG) semiconductors as Silicon Carbide (SiC) and Gallium Nitride (GaN) [7]- [11]. Furthermore, we evaluated the advantages and disadvantages of its utilization in the area of small drones paying special attention to the feasibility of our proposal. This paper is organized as follows. Section II exposes the system requirements for the DC/DC converter that will be on board of the UAV currently under development. Section III reminds the principles of operation and design of the proposed buck converter. The main contributions of this paper are presented in the following sections: Section IV presents the result of comparing Si, SiC and GaN power transistors. Section V provides the experimental setup and results of the GaN-based converter along with a discussion on them. Finally, section VI presents the conclusions and future work. II. DC/DC CONVERTER REQUIREMENTS FOR HYBRID UAV Emerging technologies are radically changing traditional operating procedures in disaster relief and emergency response management. In particular, UAVs or also known as Remotely Piloted Aircraft System (RPAS), for those operated remotely, are proving themselves extremely useful in Search And Rescue (SAR) missions. They bring very advantageous capabilities such as rapid response, remote operation, transportation of equipment, monitoring of wide areas, and multisensor deployment. Rapid Intervention Vehicles (RIV) typically used in SAR missions have cargo space for at most one small RPAS of either fixed or rotary wing type. Fixed wing aircrafts are well suited for rapid deployment and remote monitoring but they require external means (runways or catapults) for takeoff and landing, and they cannot hover. Rotary wing aircrafts (such as quadrotors) have much less autonomy and range but they can hover and perform Vertical Take-Off and Landing (VTOL); thus, they are suited for restricted or inaccessible areas and sensor deployment. A third kind of aircraft, still relatively uncommon, is fixed-wing convertible aircraft with VTOL capabilities [12]. They have the advantages of fixed and rotary wing aircraft without most of their shortcomings. A convertible UAV would allow a first response RIV to operate a single multi-purpose, multi-mission aircraft. The UAV that is being developed is proposed to explore the use of convertible fixed-wing aircraft with VTOL capability specifically designed for SAR missions. The aircraft will be easy to operate and fast to deploy; to this mean, guidance, navigation and control algorithms will be developed considering the requirements of SAR missions. Besides, highly integrated and efficient embedded electronics will be developed to reduce the aircraft dimensions and increase its autonomy. Autonomy will be increased through use and management of renewable power sources such as solar and fuel cells. The interdisciplinary nature of this project guarantees its novelty and the different technologies that will be developed ensure a breakthrough in the way that UAVs are used in emergencies: currently, they only use either fixed wing or rotary wing technology due to the complexity and high-risk associated to develop this technology. 
The first studies conducted (last 4 years) in this field by the research team ( [13], [14]) jointly with companies in the emergency sector have determined that for maximum use the aircraft should have the following characteristics: -Small dimensions, so that it can be transported in a rapid intervention vehicle (VIR) for its integration into the coordinated rapid response measures in emergency situations. -Versatility to carry out a variety of SAR missions, from monitoring to surveillance, transportation of diverse medical payloads, or deployment of sensors, with the greatest autonomy and scope possible. -Simplicity and secure operation so that even a healthcare professional can use it with hardly any additional training. -Compliance with airworthiness requirements. Based on the established requirements, a series of work packages have been created that mark the development of the aircraft following a bottom up sequential design with concurrent engineering interaction between all the involved packages. One of the work packages is the development of high efficiency embedded electronic systems, in which this paper is motivated. Fig. 1 shows an example of partial result achieved to date: a scale model of the UAV-VTOL implemented to be tested in a wind tunnel. Some of the characteristics of UAVs such as weight (19 kg including structure, avionics, payload, and power sources) and size (wingspan of 2.26 m and a fuselage length of 1.67 m) will impose requirements on the weight and size of the electronic systems to be developed and integrated into the aircraft. Previous studies in the field of generation and storage of energy in the convertible UAV have established that the power supply to the engines and systems will be given by a hybrid system of hydrogen fuel cells, batteries and solar panels. This system, whose objective is to reach the optimum degree of hybridization to maximize the drone performance, will give a rated output voltage of 36 V which must be transformed through various power converters with the ultimate goal of VOLUME 8, 2020 feeding the different users on board such as motors, computers, actuators, and sensors. Fig. 2 shows a general description of the components that make up the electrical system of the convertible UAV under development. The power conversion system is made up of a DC-DC power converter plus an inverter to power each brushless DC electric motor of the UAV and a DC-DC converter for the rest of equipment. This work focuses on the DC-DC converter which will provide the supply voltage to the rest of the UAV's electronic systems. Thus, this converter must be capable of providing the voltages required for the possible subsystems that will go embarked on the UAV while delivering enough power for optimum operation. Other design objectives will be high efficiency and a reduced weight and size. Previous studies determined that with the avionics and payload that the convertible UAV would carry, it would be required a DC/DC converter able to deliver 12 V and 60 W to a load. Its input voltage can range from 32 V to 40 V, being 36V the rated input voltage. Other general requirements for the converter are: -Load regulation and line regulation must be comparable to those in similar commercial converters (between 0.5% and 3%, approximately). -It must be very efficient compared with silicon-based converters (these present a peak efficiency ranging from 86% to 92%). -It must be light enough; we think a good design would have a weight less than 1% of the weight of the UAV. 
-It must be small enough to be mounted inside the UAV fuselage together with the payload. The maximum total volume of the system will depend strongly on the model of UAV to use. In this paper, as reference mission profile for this class of UAV, we consider a 19 kg MTOW (maximum take-off weight) airplane cruising at 300 m AGL (above ground level). So, the maximum weight of the converter must be 190 g. Besides, since the variety of UAV fuselages existing nowadays, we estimate a maximum volume of the converter of 6 cm × 3 cm × 2 cm. III. PRINCIPLES OF OPERATION AND DESIGN OF A BUCK CONVERTER Since we require a DC/DC converter with very high efficiency, it is necessary to choose a switching regulator design versus a linear regulator. In order to provide the 12 V output from the input range [32 V, 40 V] we have chosen a classical non-isolated topology because of its simplicity and its wellknown efficiency: a step-down (buck) converter. Fig. 3 shows this topology where all the components (switches sw1 and sw2, inductor L and capacitor C) are assumed ideal. We have considered that our converter does not require electrical isolation between the input and output. Thus, we do not have to use a transformer to eliminate the dc path between its input and output, reducing the complexity, the number of components and the size of the converter. Although the aim of this paper is to study the impact of emergent devices on power converters for small drones (size, weight, switching losses, etc), this section presents a summary of the operating principles of the converter for those readers who are not familiar with power electronics. Operation of the buck converter is based in two stages: the ON and OFF stages. In the ON stage, with duration t on , the switch sw1 is closed and the switch sw2 is open. Then the current flows from the source V i towards the load, through the inductor L, and it will go increasing. The voltage induced across the inductor will counteract the voltage of the source and will reduce the output voltage. At the same time, the inductor will store energy in the form of a magnetic field. In the OFF stage, with duration t off , the switch sw1 is open and the switch sw2 is closed. Then the source voltage V i is disconnected from the circuit but the current i through the inductor will continue to flow due to the switch sw2, which is closed. The current magnitude will drop and, thus, the induced voltage across the inductor will change its direction. The inductor becomes a source to supply the load by releasing its stored energy. By switching between on-state and off-state at a constant frequency f s (switching time period T s = 1/f s = t on + t off ), the buck converter is able to produce a lower V o average voltage than the dc input voltage V i . Assuming steady state, the average output voltage is controlled by controlling the switch on and off duration (t on and t off ) and can be calculated [13] as Defining the switch duty cycle D as the ratio of the on duration to the switching time period (D = t on /T s ), (1) yields V o = DV i . So, by varying D, the output voltage V o can be controlled. Besides, V o is always less than or equal to the input since D is between 0 and 1. The converter can operate in two operating modes: continuous mode (CCM), where the current i through the inductor is always greater than zero, and discontinuous mode, where i can be canceled at some point. 
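As a quick illustration of the relation V_o = DV_i derived above, the following short Python sketch (added here for illustration and not part of the original paper; the variable names are ours) computes the ideal duty cycle and currents over the specified 32–40 V input range for the 12 V / 60 W design.

```python
# Illustrative sketch: ideal CCM buck operating point for the stated design
# (12 V, up to 60 W output from a 32-40 V input; V_o = D * V_i).

V_OUT = 12.0            # regulated output voltage [V]
P_OUT = 60.0            # maximum output power [W]
I_OUT = P_OUT / V_OUT   # full-load output current [A] -> 5 A

for v_in in (32.0, 36.0, 40.0):
    duty = V_OUT / v_in          # ideal duty cycle D = Vo / Vi
    i_in_avg = P_OUT / v_in      # lossless average input current
    print(f"Vi = {v_in:4.1f} V -> D = {duty:.3f}, "
          f"Io = {I_OUT:.1f} A, Ii(avg) = {i_in_avg:.2f} A")

# At the 36 V rated input the ideal duty cycle is 1/3, and D stays between
# 0.30 and 0.375 over the whole specified input range.
```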
We have chosen that the converter operates in CCM because the relationship between V o and V i is linear (it only depends on D) and thus, it is easier to control the system. This method for controlling the output voltage by switching at a constant frequency f s and varying the duty cycle is called pulse-width modulation (PWM) switching. We have assumed that our buck converter is asynchronous, that is, its switch sw2 is implemented with a diode and we only need one signal to turn on or turn off the switch sw1. For the design of the converter, it is necessary to choose the elements so that the converter's performances could be optimum [15]. For given values of T s , V o , V i , L and D, if the average output current through the load I o becomes less than a certain critical value I oc , then the current i through the inductor will become discontinuous: Thus, in order to ensure continuous conduction mode (CCM), the value of the inductor L must be greater than a critical value L c given by Equation (3) shows that the value of L c must also consider the minimum current to be provided by the converter. Regarding the output capacitor C, in the previous analysis, it has been assumed to be so large to yield v o (t) = V o . Nevertheless, for a practical value of C, the ripple in the output voltage can be calculated according to where f c = 1/[2π(LC) 0.5 ] is the cutoff frequency of the lowpass filter composed by the inductor L and the capacitor C of the converter. This filter significantly reduces the output fluctuations by selecting f c f s . As well, (4) shows that the ripple is independent of the output load power in CCM. We have assumed that the switch sw1, the diode, the inductor and the capacitor are ideal and they have no associated losses. Furthermore, the power conversion efficiency η = P o /P i is unity or 100%. Nevertheless, real devices have parasitic effects that affect the performances of the converter such as the efficiency, the V o /V i ratio, the peak-peak voltage ripple, etc. So, the real capacitor will have a power loss which can be estimated by multiplying the root mean square (RMS) current through the capacitor by the square of its equivalent series resistance (ESR). Therefore, we must choose this capacitor with ESR as small as possible. In fact, considering the ESR in our converter, the ripple in the output voltage can be approximated to The real inductor will have losses that will reduce the efficiency and as well, we will have to ensure that it will handle the peak currents of the circuit. The real diode also presents an energy loss which affects the efficiency and operation of the power converter. The diode energy loss is composed of conduction and switching losses, where the turn-off energy loss is critical. So we could choose a Schottky diode or a fast-recovery diode. Schottky diodes are used in very low output voltage circuits because they have low forward voltage drop (typically 0.3V). Fast-recovery diodes are usually used in high-frequencies circuits. As to the transistor, which implements the controllable switch sw1 in Fig. 3, it could be a bipolar junction transistor (BJT), a metal-oxide-semiconductor field effect transistor (MOSFET), a gate turn off (GTO) thyristor or an insulated gate bipolar transistor (IGBT). Provided the low value of the currents and voltages of our converter, BJTs and MOS-FETs would be more suitable. 
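The bodies of Equations (2)–(5) referenced above did not survive extraction, so the following sketch uses the standard CCM buck design relations consistent with the surrounding description (inductor ripple current, boundary-conduction inductance, and capacitive ripple plus an ESR step); treat the exact expressions as assumptions rather than the paper's own equations. The 100 kHz switching frequency, 47 µH inductor and 3 × 68 µF capacitors are values used later in the prototype, while the 1 A minimum load, ~0.24 V ripple target and 30 mΩ ESR are illustrative assumptions.

```python
# Illustrative sizing sketch for the buck power stage using standard CCM
# relations (assumed counterparts of Equations (2)-(5), whose bodies are
# missing from the extracted text). Values marked as assumptions are ours.

V_IN, V_OUT = 36.0, 12.0    # rated input and regulated output [V]
F_SW = 100e3                # switching frequency [Hz], as used in the tests
I_MIN = 1.0                 # assumed minimum load current for CCM [A]
DV_MAX = 0.24               # assumed peak-to-peak output ripple target [V]
ESR = 0.030                 # assumed capacitor series resistance [ohm]
L = 47e-6                   # inductor used in the prototype [H]

D = V_OUT / V_IN                                # duty cycle
L_crit = V_OUT * (1 - D) / (2 * F_SW * I_MIN)   # minimum L for CCM at I_MIN
dI_L = V_OUT * (1 - D) / (L * F_SW)             # inductor ripple current [A]
# Conservative additive estimate: capacitive ripple dI_L/(8*F_SW*C) plus the
# ESR step dI_L*ESR must stay below the ripple target.
C_min = dI_L / (8 * F_SW * (DV_MAX - dI_L * ESR))

print(f"D = {D:.3f}, L_crit = {L_crit*1e6:.0f} uH (prototype inductor: 47 uH)")
print(f"dI_L = {dI_L:.2f} A, C_min ~= {C_min*1e6:.0f} uF "
      f"(prototype: 3 x 68 uF in parallel)")
```

With these numbers L_crit ≈ 40 µH, so the 47 µH prototype inductor keeps the converter in CCM down to roughly 1 A of load, which matches the current range used in the measurements reported later.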
Moreover, a MOSFET would be a better option than a BJT because of the ease of control, higher switching speed, lower switching power losses, lower on-resistance, and reduced susceptibility to thermal runaway. The choice of this device among the commercially available is extremely important in designing the power electronic converter and it depends strongly on the specific application. In the last decades one of the main trends to reduce the volume, weight and cost of the power converters has been to increase the switching frequency since it reduces the required size of the passive energy storing elements (inductors and capacitors). However, the increase in switching frequency also increases the switching losses and thus, the efficiency can become poor and the power semiconductors can fail due to overheating. Some methods for decreasing switching losses have been proposed: resonant converters (series, parallel and series-parallel) [15], multilevel converters [16] or new materials for power semiconductors [7], [17]. These materials are Silicon Carbide (SiC) and Gallium Nitride (GaN), both wide band-gap (WBG) semiconductors. Therefore, we have decided to investigate these new technologies of power devices to improve performances as efficiency, size and weight of a conventional power converter and be able to onboard it in the hybrid UAV we are developing. IV. COMPARISON OF Si, SiC AND GaN POWER SWITCHES In this work we explore the characteristics of real Si, SiC and GaN power switches in order to choose the most appropriate for our converter. First, we carry out a theoretical comparison of Si, SiC and GaN semiconductors and then, a simulation-based comparison of Si, SiC and GaN power switches. VOLUME 8, 2020 A. THEORETICAL COMPARISON OF Si, SiC AND GaN SEMICONDUCTORS Scientific literature on properties of semiconductor materials has been reviewed and significant variations in some values have been found [8], [17], [18]. In this paper, we present Table 1 as a good comparison among some key properties of the main semiconductors used for high-performance electronics applications. The main property of WBG semiconductors such as GaN and SiC is that they have higher band gaps than silicon. Thus, they have lower intrinsic leakage currents and can withstand higher operating temperatures than Si. Other property of GaN and SiC is that they have higher critical or breakdown field than Si, which allows them to operate with higher voltages. Then, in transistors with the same value of breakdown voltage, the layers of devices with GaN and SiC technology can be thinner than those of Si, which means smaller dimensions and higher power density. Besides, GaN has the critical electric field 1.5 times higher than SiC. The saturation drift velocity (maximum average drift speed that an electron reaches when the applied electric field is greater than a threshold value) is one of the main factors on which the switching capacity of semiconductor devices depends. It is higher in GaN and SiC than in Si. This characteristic, together with a greater mobility of electrons and holes than the SiC, makes the GaN the favorite material for operating at high frequencies. The thermal conductivity measures the heat conduction capacity of the materials. The evolution of temperature is a critical factor in the operation of semiconductor devices since the increase in temperature means a decrease in the mobility of electrons and therefore, a lower efficiency in operation. 
In addition, high temperatures can damage the devices and the other component around them. Regarding this issue, the SiC has a clear advantage over Si and GaN. It has a thermal conductivity twice that of Si and GaN, so it shows a superior capacity to transfer heat from the inside of the device to the outside. SiC is the best option for devices that operate at high temperatures. To facilitate the comparison of the power devices manufactured with different materials, a series of parameters called figures of merit (FOM) have been defined, which summarize some of the main properties of these materials. Some of them are: -JFOM: it was proposed by Johnson [19] and it estimates the potential of a material for high frequency and high power applications. For this, it considers the electric breaking field E c and the saturation velocity v sat . It has a value of about 1.1 × 1012V/s for silicon. -KFOM: it was proposed by Keyes [20] and it provides the thermal limitation to the switching behavior of the transistors, considering the thermal conductivity of the semiconductor λ, its dielectric constant ε r of the material, the velocity of light in free space c and the saturation velocity v sat . It has a value of about 1.17 × 1015W/deg/sec for silicon. The KFOM specifies the maximum switching speed of an electronic logic element. But these traditional figures of merit (JFOM and KFOM) did not do justice to GaN and SiC versus Si. Neither the application of semiconductor transistor in the final application circuit nor its reliability was considered. So Baliga [21] derived a figure of merit which defines material parameters to minimize the conduction losses in power transistors, considering the electron mobility µ n , the band gap of the semiconductor E g and the dielectric constant ε r of the material: The BFOM is based upon the assumption that the power losses are solely due to the power dissipation in the onstate by current flow through the on-resistance of the power transistor. Thus, the BFOM applies to systems operating at lower frequencies where the conduction losses are dominant. Later, Baliga [22] proposed a new FOM, with dimensions of frequency, for devices operating at high frequencies: where R ON and C iss are the specific on-resistance and capacitance, which are both determined by the material characteristics and the device cell design. In fact, the BHFFOM can be rewritten in terms of the material parameters: where V G and BV are the applied gate bias voltage and the breakdown voltage, respectively. The BHHFOM estimates the potential of a material for high power applications and high frequencies where the switching losses due to the charging and discharging of the device become more important. Table 2 shows the mentioned figures of merit for SiC and GaN normalized with respect to Si [23]. SiC and GaN present better performance for high frequency and high power applications than Si. Moreover, GaN transistors have higher operating efficiency at higher frequencies than SiC transistors; this advantage is reflected in higher values of BFOM and BHFOM. In any case, although these figures of merit are frequently used, they are approximate estimations of the operation of the devices. They do not contemplate the parasitic resistances and other effects that limit their operation. So other figures of merit can be defined to show more effects [24]. B. 
SIMULATION-BASED COMPARISON OF Si, SiC AND GaN TRANSISTORS In this section, we will perform an analysis by simulation of the behavior of GaN, Si and SiC transistors in order to determine the advantages and disadvantages of the GaN technology. We will study characteristics such as, among others, thermal behavior, energy losses and characteristic curves. This study will be carried out through simulations with the simulation program SPICE from the models provided by the manufacturers. In order to compare different technologies, it is necessary to choose devices with similar design parameters of drain-source breakdown voltage (BV DS ), maximum resistance (R DS(on) ) and maximum continuous drain current (I DS ). Since the GaN transistors catalog is still reduced, the initial choice will be made on GaN devices and, later, we will choose Si and SiC transistors with similar parameters. We have chosen the transistors shown in Table 3, which have good models of thermal behavior and high-frequency behavior. Simulation results indicate a good agreement with the data provided by datasheets from manufacturers. These transistors have high breakdown voltages (650 V), maximum continuous drain currents around 30 A (except the model IPB65R190C7, which has 13 A) and resistances R DS(on) between 50 m and 190 m . We have not found Si and SiC transistors with values of R DS(on) as small as in the GaN transistors and comparable values of I DS . Some of the advantages of GaN technology are based on this fact. Besides, one of the advantages of the selected GaN transistor is its small size compared to the other devices. Table 3 shows the dimensions of the transistors obtained from their datasheets. Although the area (width per length) of the GaN and Si transistors is similar, the SiC transistor has an area that triples it. Moreover, the GaN transistor presents a laminated packaging much finer than the usual TO-XX, up to 10 and 20 times smaller in volume. Despite the IPB65R190C7 Si transistor has a R DS(on) of 190 m , larger than the rest, it has been included in Table 3 because of its lowest price. But if we choose a Si MOSFET with similar R DS(on) to a GaN or SiC transistor, its price is not much lower. So, hereinafter, we will only consider the transistors GS66508P (GaN), SCT3080ALGC11 (SiC) and IPB65R065C7 (Si). Fig.4 shows the simulated gate-to-source threshold voltage (V GS(th) ) of the chosen transistors depending on the operating junction temperature (T j ), which has been varied from −50 • C to 150 • C. The GaN transistor keeps the V GS(th) value relatively constant with temperature: only an increase of 0.15 V is shown. In the case of Si and SiC transistors, it is shown that V GS(th) decreases about 2 V. In addition, the V GS(th) values for GaN transistors are much smaller than those of other technologies and this is an advantage to drive the device from microcontrollers with small output voltages (for example, 5 V). One of the most important parameters of a power transistor is the drain-source on-state resistance R DS(on) , which determines the conduction losses. Fig. 5 shows the R DS(on) values (in m ) of each chosen device for a specific gate-to-source voltage V GS and junction temperature values T j between 25 • C and 150 • C with an increase of 25 • C. In order to compare more justly the transistors according to the R DS(on) , we have taken into account the maximum V GS of each one since it will provide the minimum R DS(on) : 7 V for the GS66508P, 20 V for the IPB65R065C7 and 22 V for the SCT3080ALGC11. 
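To make the link between R_DS(on) and conduction loss concrete, the short sketch below (ours, not from the paper) evaluates P_cond = R_DS(on)·I_D² at the 9 A reference current used in this comparison, taking the 25 °C on-resistances at a common 6 V gate drive discussed here (50 mΩ for the GaN, 65 mΩ for the Si and 125 mΩ for the SiC device); as the text points out, the Si and SiC parts allow higher gate voltages, which lowers their on-resistance, and all three values rise with junction temperature.

```python
# Illustrative conduction-loss comparison at the 9 A reference current, using
# the 25 C on-resistances at a common 6 V gate drive quoted in the comparison.
# DC conduction only; switching losses are evaluated separately by simulation.

I_D = 9.0  # drain current [A]

r_ds_on_25C = {                      # on-state resistance [ohm]
    "GaN  GS66508P":      0.050,
    "Si   IPB65R065C7":   0.065,
    "SiC  SCT3080ALGC11": 0.125,
}

for name, r in r_ds_on_25C.items():
    p_cond = r * I_D ** 2            # P_cond = R_DS(on) * I_D^2
    print(f"{name}: R_DS(on) = {r*1e3:5.1f} mOhm -> P_cond = {p_cond:5.2f} W")

# -> roughly 4.1 W (GaN), 5.3 W (Si) and 10.1 W (SiC): a lower on-resistance
#    at the same gate drive translates directly into lower conduction loss.
```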
So we have chosen V GS of 6 V for the GS66508P and 18 V for the IPB65R065C7 and the SCT3080ALGC11. Besides, given that the R DS(on) decreases if the drain current I DS increases, we have remarked the value of R DS(on) for a same value of I DS (9 A) at 25 • C and 150 • C in all cases. So, if we had chosen V GS of 6 V for all the transistors, the GaN transistor would be the best regarding to the R DS(on) (50 m versus 65 m of the Si one and 125 m of the SiC one). However, if we establish fairer conditions to compare them, as said before, we can observe that at 25 • C the GaN transistor keeps the VOLUME 8, 2020 We have studied the time evolution of the junction and case temperature for the chosen transistors with and without an external heat sink. A thermal interface material (TIM) has been attached to the heat sink in order to improve heat transfer. We must mention that the manufacturer of the chosen GaN devices implements a new packaging: the discrete device is embedded within a laminate construction so that it leads to smaller volume, lower resistance and lower inductance than a conventional packaging. In the case of the Si device (package TO-263), a heat sink 573300D00010G has been selected. For the SiC device (package TO-247-3), a heat sink R2A-CT4-38E has been selected. And for the GaN device, we have considered the cooling method recommended by the manufacturer for the GaNPX package: a bottom side cooling with a heat sink via a PCB attached to the thermal pad of the device. In this case we have selected a heat sink MPC14-14. Table 4 shows the value of the different thermal resistances involved in the heat transfer process: R θJC (junction-to-case thermal resistance), R θTIM (TIM thermal resistance), R θHSA (heat sink to ambient thermal resistance), R θ PCB (PCB thermal resistance) and R θJA (junction-to-ambient thermal resistance). The GaN device presents smaller R θJA than Si and SiC, both with and without heat sink, and so, it will dissipate heat better. Fig. 6 shows the time evolution of case temperature T C for the chosen transistors with and without heat sink when they are continuously switching (at 1 kHz and similar dissipated average power) during an enough time interval. The effect of the heat sink is clearly seen in all the graphs: every device experiences a much lower temperature increase with an external heat sink. Moreover, the GaN transistor shows significantly lower temperature increases than Si and SiC devices both with and without heat sink. As well, it shows times for stabilization temperature of up to one hundred times smaller than the other. Finally, we have checked by simulation that the transistors switching times are in good agreement with the data in the datasheet. In fact, the Si and SiC transistors cannot switch fast enough beyond 10 MHz while the GaN transistor presents this limit at 100 MHz. Simulations have been performed at frequencies of 50KHz, 500 KHz, 1 MHz and 10 MHz and we have collected data of energy and power losses both in conduction and in switching turn-on and turn-off. Fig. 7 shows a diagram of the percentage of energy dissipated in conduction, switching on and switching off over the total energy loss of each transistor for different frequencies. As expected, switching losses increase with frequency. Fig. 8 shows a diagram of average power dissipation of each transistor for different frequencies. It can be observed that the power loss of the GaN transistor is smaller and, besides, it is more noticeable with frequency. V. 
EXPERIMENTAL SETUP AND RESULTS OF THE GaN-BASED CONVERTER In order to implement the converter, all its components have been chosen carefully, taking into account the design considerations explained in Section 3 and seeking to obtain high efficiency in the system. Using component models as realistic as possible in simulation, we can estimate the influence of the real characteristics of the elements on the converter's performance. The GaN transistor finally selected is the GS61004B because it offers a good compromise between R DS(on) (15 mΩ) and input capacitance C ISS (295 pF) compared with other GaN transistors and, besides, it costs only about $6. It has a BV DS of 100 V, which is sufficient for our application. We have opted for an Arduino Due to implement the PWM control. It is a microcontroller board based on the Atmel SAM3X8E ARM Cortex-M3 CPU, easy to use and powerful enough for our application. Since it runs at 3.3 V, a gate driver is needed to drive the transistor directly. We selected a driver specifically intended for GaN transistors, the LMG1205YFXR, a MHz-range gate driver whose high-side bias voltage is generated using a bootstrap technique and is internally clamped at 5 V, which prevents the gate voltage from exceeding the maximum gate-source voltage rating of the transistor. Since our aim is to demonstrate the advantages of using GaN technology in a converter for a hybrid UAV, we have built a modular prototype. This makes it easier to carry out the changes and component substitutions that usually arise in the laboratory. Since the driver comes in a very small 12-pin DSBGA (Die-Size Ball Grid Array) package (2 mm × 2 mm body size) that is difficult to solder, we have built a shield that allows different external configurations. Fig. 9(a) shows the driver shield. Fig. 9(b) shows the Printed Circuit Board (PCB) we have also built to test the operation of the driver with the GaN transistor separately from the converter. For the load, we built an arrangement consisting of five rows of four precision military-grade resistors each. Individually, each of these resistors can withstand a maximum of 3 W; therefore, each row can dissipate up to 12 W. The bank allows the desired number of rows to be connected, giving a consumption of 12 W per connected row. The inductor, diode and capacitor of the converter have been mounted on another PCB. Finally, we replaced the PCB carrying a single GaN transistor with another carrying several transistors in parallel, for more flexibility. Fig. 10 shows the final prototype that serves as proof-of-concept of the proposed GaN-based converter. In this final prototype we have also used three 68 µF electrolytic capacitors in parallel, a 47 µH power inductor and a VS-12CWQFN-M3 diode. The operation of the converter has been checked with several experimental setups. Firstly, as mentioned before, we tested each module separately and, afterwards, the whole system. In a first step, we checked the correct response of the driver connected to two test boards: one with a low-side transistor and the other with a high-side transistor. Tests were carried out over a range of frequencies while checking the gate-source signal on the oscilloscope. In addition, we tried different values of the bootstrap and bypass capacitors and of the external gate resistances to observe the response and the gate-voltage oscillations due to parasitic inductances. We have checked frequencies up to 2 MHz. 
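As a small illustration of the load bank just described (not from the paper; the arrangement of the resistors inside a row is our assumption), the following sketch works out the values it implies: at the 12 V output each connected row must present 12 Ω to dissipate 12 W, and with four resistors per row each device stays within its 3 W rating whether the row is wired as a series or a parallel string.

```python
# Illustrative check of the resistive load bank: 5 rows, 4 resistors per row,
# 12 W per connected row at the 12 V output. Rows are assumed to be switched
# in parallel across the output; the arrangement inside a row is an assumption.

V_OUT = 12.0        # converter output voltage [V]
P_ROW = 12.0        # dissipation per connected row [W]
N_ROWS_MAX = 5

r_row = V_OUT ** 2 / P_ROW      # equivalent resistance of one row -> 12 ohm
print(f"Row resistance: {r_row:.0f} ohm "
      f"(e.g. 4 x 3 ohm in series, or 4 x 48 ohm in parallel, 3 W each)")

for n_rows in range(1, N_ROWS_MAX + 1):
    r_load = r_row / n_rows                 # connected rows in parallel
    i_out = V_OUT / r_load
    print(f"{n_rows} row(s): R_load = {r_load:5.2f} ohm, "
          f"Io = {i_out:.1f} A, Po = {V_OUT * i_out:.0f} W")

# With all 5 rows connected the bank draws 5 A, i.e. the full 60 W design point.
```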
The first tests showed the open-loop converter worked properly. Fig. 11 shows the experimental setup to test the converter. We varied the duty cycle and the operation frequency. Nevertheless there were ringing effects in the circuit and they became more serious when the input voltage increased. So, as a precaution, we tried to solve the ringing effects making initial measurements at low power, with small values of input (around 12 V) and output voltage (around 4 V). We obtained measurements of power efficiency for different values of switching frequency (from 60 kHz to 200 kHz) and they kept all beyond 91% and more or less constant. Then we decided to fix the frequency at 100 kHz for the following tests. In order to illustrate ringing effect and how it could damage the GaN transistor Fig. 12(a) shows the oscillations observed in the gate-source voltage waveform around the switching instants, when source voltage changes at a faster rate. Since the maximum rating for V GS is (−10 V, +7 V) and for a transient of 1µs V GS(transient) is (−20V, +10 V), the oscillation when the transistor turns on is potentially dangerous. In order to reduce ringing effect, we applied an optimal combination of measures: two external gate resistors between the driver and the gate of the transistor (R G(on) and R G(off) of 47 ) and a snubber circuit between the source of the transistor and ground (R snubber ≈ 35 and C snubber ≈ 2.2 nF). Fig. 12(b) shows how our anti-ringing measures make V GS present good behavior even with high voltages at the input (37.2 V in the figure). These anti-ringing measures reduce the total efficiency of the converter, but we think they are necessary since our converter is intended for an aeronautical application. For this same reason, as additional safety measure, we have added a 5 V zener diode between gate and source of the transistor, although it also contributes to power losses. Then, the following step has been to check the operation of the converter in open loop and for high power. We have taken initial data by varying the input with voltage values between 22 V and 50 V for current through the load of a 1 A. Table 5 shows the different measurements taken for this case as well as the efficiency. The circuit works properly: with the theoretical value of D to give a 12 V output, although the converter is in open loop configuration, it provides 11.5 V approximately in all cases. The values of efficiency are also satisfactory, regardless of the incorporated safety measures (which increase power losses) and the fact that the board is not optimized yet to reduce the parasitic effects of connections. In a final and integrated prototype, the expected efficiency will be even better. We have also taken a series of measurements for different values of output current. The numerical results allow verifying the correct operation of the circuit with the increase in power. The converter shows a small decrease in efficiency with the increase of power as well as a small decrease in the reduction ratio with respect to the theoretical one for each D value. To illustrate the most significant results we present the Fig. 13, which shows the efficiency curve interpolating the points obtained including an input voltage sweep from 32 V to 42 V in steps of 1 V and for values of the input current from 1 A to 5 A with 1 A step increment. We can observe clearly how the efficiency is very high for small power values and, as expected, it decreases when the operating power increases. 
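As a brief post-processing illustration (the measurement values below are hypothetical placeholders, not data from Table 5 or the figures), the sketch shows how the efficiency η = P_o/P_i and the line and load regulation figures quoted in this work are computed from bench readings of input and output voltage and current.

```python
# Illustrative post-processing of bench measurements: efficiency and line/load
# regulation. All numeric readings below are hypothetical placeholders.

def efficiency(v_in, i_in, v_out, i_out):
    """Power conversion efficiency eta = Po / Pi."""
    return (v_out * i_out) / (v_in * i_in)

def load_regulation(v_out_min_load, v_out_full_load):
    """Relative output change from minimum to full load."""
    return abs(v_out_min_load - v_out_full_load) / v_out_full_load

def line_regulation(v_out_low_line, v_out_high_line, v_out_nominal):
    """Relative output change over the 32-40 V input range."""
    return abs(v_out_high_line - v_out_low_line) / v_out_nominal

# Hypothetical example point (36 V input, ~2 A load):
print(f"eta             = {efficiency(36.0, 0.70, 11.9, 2.0)*100:.1f} %")
print(f"load regulation = {load_regulation(12.05, 11.90)*100:.2f} %")
print(f"line regulation = {line_regulation(11.92, 12.04, 12.0)*100:.2f} %")
```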
Finally, we have checked the operation of the converter in closed loop, that is, in its normal operation mode inside the aircraft. In this case, it is necessary to use a voltage divider to scale and connect the converter output safely to one of the analog inputs of the Arduino Due (the maximum voltage input that can be read is 3.3 V). The two resistances used for the divider are 22 k and 4.7 k . Table 6 shows some significant measurements. In closed loop configuration, as expected, we have observed that the errors in the output voltage and current are much smaller than in the open loop case. The output is constantly being corrected, by means of the continuous modification of the D value, with satisfactory load regulation and line regulation (between 0.8% and 2.5% in any case). The converter has also shown a good behavior regarding the output voltage ripple. By way of illustration, Fig. 14(a) shows an example of the output voltage measured in an oscilloscope with CC coupling. Fig. 14(b) shows the output voltage in AC coupling in order to highlight the output voltage ripple (240 mV pp ). Fig. 15 shows the converter efficiency for a 36 V input in open-loop and closed-loop configurations. In both cases, with the increase in the load current, the efficiency decreases due to the increase in the conduction dissipation of the components. Efficiency results are very promising. They range between 85% and 99% despite the fact that we have added a snubber and resistances between the gate driver and the transistor to alleviate the ringing effect. We have also added a zener diode as safety measure to protect the transistor and a 220 µF input capacitor to reduce the ripple voltage amplitude seen at the input of the converter (although its equivalent series resistance should have been lower). Obviously, all these additional components contribute to decrease the overall efficiency. In addition, even though the converter is not optimized yet and it is a proof-of-concept prototype, it represents a technological demonstrator of the benefit of applying GaN technology in power converters for aircrafts. VI. CONCLUSION AND FUTURE WORK In this work we have explored the use of SiC and GaN technologies versus Si technology in a converter for a small hybrid UAV, checking their advantages and disadvantages in a real case of aeronautics. After theoretical studies and simulations, GaN technology has turned out the best option because the power switches present these properties: lower size, weight and a combination of lower switching and conduction losses which favors higher efficiencies. Therefore, a GaN-based buck converter was designed and implemented. Since GaN power switches technology is not a mature technology, its benefits are not fully realized if they are treated as drop-in replacements for Si devices. It is necessary to conduct research in order to optimize the properties of the GaN switches and minimize size and costs of cooling systems and auxiliary circuit components. The converter developed has been tested experimentally in the laboratory. Results confirm its proper functioning. We can say that our design already fulfils most of requirements. It is able to provide 12 V as regulated output from an input voltage in the range of 32 V to 40 V and an output power up to 60 W with small line and load regulations and a high efficiency (in the range of 85% up to over 99%). 
These promising efficiency results have been obtained even though the converter built is not optimized in terms of integration and minimization of connections. Based on the evidence found in this work, the final prototype is expected to meet the size and weight requirements. In the future we intend to integrate all the components on a single board, reducing the effect of parasitic capacitances and inductances and mitigating the ringing phenomenon without noticeably decreasing the efficiency. One possible improvement for closed-loop operation would be to replace the Arduino Due with a dedicated controller, designed and manufactured specifically for converter applications, in which the transient response is optimized to reduce losses. We will also extend the number and coverage of the experiments to demonstrate the performance of the system, for example by sweeping the switching frequency and studying electromagnetic interference (EMI). Finally, we aspire to adapt the prototype to the UAV currently under development and achieve its full operation.
9,914.8
2020-01-01T00:00:00.000
[ "Engineering", "Physics" ]
Children’s Use of and Experiences With a Web-Based Perioperative Preparation Program: Directed Content Analysis Background Web-based technology is useful as an alternative means of providing preparation programs to children in pediatric care. To take full advantage of Web-based technology, there is a need to understand how children use and learn from such programs. Objective The objective of this study was to analyze children’s use of and experiences with a Web-based perioperative preparation program in relation to an educational framework of children’s learning. Methods This study is the final part of a three-phase study in which all families with children aged 3 to 16 years (N=32) admitted for outpatient surgery over 1 week were asked to participate. Children were interviewed before (phase 1) and after (phase 2) anesthesia and surgery and 1 month after hospitalization (phase 3). The data in this study (phase 3) relate to six children (5 to 13 years) who participated in the follow-up interviews in their homes a month after hospitalization. The study used a directed qualitative interpretative approach. The interviews were conducted in a semistructured manner as the children—without guidance or influence from the interviewer—visited and navigated the actual website. The data were analyzed based on a combination of the transcribed interviews and field notes, and were subjected to a previous theoretical investigation based on children’s learning on a website in pediatric care. Results Six children, five boys (5-12 years) and one girl (13 years), participated in the follow-up study in their homes a month after hospitalization. The children were selected from the 22 initially interviewed (in phases 1 and 2) to represent a variation of ages and perioperative experiences. The children’s use of and experiences with the website could be explained by the predetermined educational themes (in charge of my learning, discover and play, recognize and identify, and getting feedback), but additional aspects associated with children’s need for identification, recognition, and feedback were also revealed. The children used the website to get feedback on their own experiences and to interact with and learn from other children. Conclusions This analysis of children’s use of and experiences with a Web-based preparation program emphasizes the importance of including a theoretical educational framework of children’s learning in the development and design of websites in pediatric care. Creating opportunities for Web-based communication with others facing similar experiences and possibilities for receiving feedback from adults are important factors for future consideration. and understanding of what we encounter and experience is dependent on continuous stimuli from other people and the environment. The individual takes on values and desires of others and assesses successes and failures in relation to others and the social context. Web-based technology opens up possibilities and offers approaches to learning in congruence with the assumptions about learning mentioned above [9][10][11][12]. Interactive play constitutes a positive driving force for learning without constraints and compulsion and enables children to be active and with the use of all senses explore new ways of understanding for learning [9,12,13]. 
Compared to "traditional education materials" webbased technology can expand the range of things that children can create and in doing so enable them to encounter ideas that were previously, without the new technology, not accessible to them [10]. As a learning tool web-based technology offer a number of advantages also in the health care context, including the availability, tailoring of information for the individual needs, a private learning environment and an immediate reinforcement of the learning that has occurred [14][15][16][17][18]. Web-based technology also enables contact with experts or others facing similar health challenges. The social integration and sharing of information that occurs through these connections may increase patient´s involvement, learning and understanding of their medical conditions [18]. New communicative conditions make learners become not only consumers of information but also producers of information and active members of learning communities. With new resources for communication, new demands and new possibilities are raised for learning [19]. Pre-understanding Pre-understanding, described by several researchers [2,4,8] interpretation of the world always starts with what is already know which helps to understand but also to react if something seems odd, different or frightening. Although the awareness of the pre-understanding it is often not apparent it will direct the individual attention and action. Pre-understanding can thereby be a barrier for learning when thinking gets obstructed and the ability to see and consider other perspectives decreases. Children bring varied levels of experience and learning preferences to the educational environment that is offered. From a pedagogical perspective it is a challenge to understand existing features of the pre-understanding of a group of children, like children of a certain age, as well as the variety of children's individual pre-understanding within a group. Preunderstanding of children in the same age will vary depending on their previous experiences, knowledge and approaches to learning. Careful consideration needs to be given to what sort of information children should receive and when and how it should be provided [16]. Motivation Consensus can be found among pedagogical researchers that the learner´s motivation is vital to stimulate the start and maintenance of a learning process. Some common features related to the characteristics of motivation have been highlighted in different learning theories [1,8,20,21]. The experiences of meaningfulness are crucial to stimulate the motivation to learn. The learner has to be driven by a will to understand and/or manage something. To learn has to be important, out of different reasons. Meaningfulness can be triggered both by external factors, like "I will get a reward of some kind" or "someone will be very proud if I manage something", or internal factors, like "my curiosity is awakening" and "I want to find out how something works". Meaningfulness is also triggered when previously approaches used to solve problems are not working and new questions needed to be answered and investigated arise. The individual experiences an urgent need to understand and begins to search for information of different kinds in order to cope with the situation. The experiences of something being fun and exciting are also important for meaningfulness [22]. According to Piaget [20] humans pursue equilibrium in relation to the environment. 
Each action requires an interpretation of what we see and experience (assimilation). Insufficient understanding creates imbalance and the searching for explanations via reconstruction of thoughts, searching for explanations and understanding, to restore the balance (accommodation), starts. Achieving balance, to understand and manage, becomes an important form of feedback which, in turn, stimulates continued learning. Motivation is stimulated both by a challenge and the experience of having to master something, as well as by the feeling of succeeding [7]. Feedback on the learning achievements has turned out to play an important role in stimulating motivation and is also part of experiencing meaningfulness [23,24]. Learning processes The individual's processing of information in different ways and on different levels is central and constitutes the essence of the learning processes. The learner does not only receive information but also interprets and connects the information to already existing knowledge and thereby constructs new understanding. Feedback on learning achievements is very important this learning process [23,24]. Knowledge is stored, interpreted and incorporated in the memory, in the brain using concepts related in sematic networks. In order to recognize situations, facts and solve problems, new knowledge has to be associated with the individual existing conceptual structure. New information must have a new meaning for the individual to be included and perceived as part of the whole. This highlights the importance for the learner to process perceived problems and questions and not only be offered a complete answer [25]. A creative learning process can be based on an investigative approach to the situations and problems encountered by the learner. All senses are needed to capture new information and processing existing knowledge cognitively, emotionally and by action. By processing the new information, analyzing the old and new understanding, new understanding and knowledge can be constructed [2,4]. Play constitutes a central part in children's life and an important part of their learning process [19]. Interactivity is an important part of children's play which enables for children to learn by using all their senses to understand the situations encountered [13]. Buytendijk defined play as an activity that is not oriented towards a specific goal; but a phatic way of perceiving the world where "the player" has an inner urge to move [26]. The concept "play", which relates both to "free" and rule-based activities, has connotations of many different kinds of activities and meanings for both children and adults. Play, playfulness and imagination can, in an overall perspective, be understood as a process of engagement, transformations of signs meaning making, reflections and meta-reflection. This perspective relates play to learning activities. Learning involves playing activities and playing can also be understood as learning activities [19]. Children's learning processes differ related to their age and cognitive development level, but also individually, like how much information they can process and how long they can keep up their attention. The outcome of learning Learning processes are meant to result in understanding, ability to perform skills and maybe changed attitudes and behavior depending on the learning situation [1,2,25]. In this case the learning goals are related to children and parents being prepared for a hospitalization and more specifically for anesthesia and surgery. 
This means for the child to understand what is going to happen and being able to cope with the situation. Of importance is also that both children and parents experience safety and confidence. The outcome of children's learning on a web-site will appear mainly when they attend the hospital, which may be too late. Thus, it is important to support children´s learning processes by enable optimal prerequisites for them to evaluate their learning via the web-site prior to the hospitalization. Feedback on the learning achievements can support the learner to be confident that the message is understood correctly or make visible that one need to repeat or try again [23,27]. The use of web-based technology has been shown to be associated with improvements in children´s development of concepts and cognition, knowledge and skills for thinking, planning, observing, problem-solving, creativity, reading, language, mathematics, hypothesis formation and testing. Well-designed web-based learning activities can improve skills of abstract thinking, reflective thinking, analyzing and evaluating information and scientific reasoning [9][10][11][12]. The dynamic nature of web-based technology seems to improve comprehension and help children to create mental models [28] and concretely explore abstract scientific concepts that would have been difficult to manipulate and learn without electronic components [9]. It helps them to understand health concepts and their complex relationship and to formulate thoughtful and plausible theories about the events that occur behind the observable data [29]. Web-based technology has also been found to be effective for comprehension and recognition of unfamiliar words [12], understanding of cause and effect [30] and for introducing children to abstract concepts, that were previously considered too advanced for their age group [9]. Technology-based activities can also engage children in collaborative learning, reasoning and problem-solving activities that had been thought to be too sophisticated for them to understand and carry out at very young ages [12]. The fact that children are not only served the content but must be active stimulates creativity and imagination which leads to engagement and an extended attention span [9,11,31].
2,620.8
2019-04-12T00:00:00.000
[ "Medicine", "Education", "Computer Science" ]
Study of $\psi(3770)$ decaying to Baryon anti-Baryon Pairs To study the decays of $\psi(3770)$ going to baryon anti-baryon pairs ($B\bar{B}$), all available experiments measuring the cross sections of $e^+e^- \to B\bar{B}$ at center-of-mass energy ranging from 3.0 GeV to 3.9 GeV are combined. To relate the baryon octets, a model based on the SU(3) flavor symmetry is used and the SU(3) breaking effects are also considered. Assuming the electric and magnetic form factors are equal ($|G_E|=|G_M|$), a global fit including the interference between the QED process and the resonant process is performed. The branching fraction of $\psi(3770) \to B\bar{B}$ is determined to be $(2.4\pm0.8\pm0.3)\times10^{-5}$, $(1.7\pm0.6\pm0.1)\times10^{-5}$, $(4.5\pm0.9\pm0.1)\times10^{-5}$, $(4.5\pm0.9\pm0.1)\times10^{-5}$, $(2.0\pm0.7\pm0.1)\times10^{-5}$, and $(2.0\pm0.7\pm0.1)\times10^{-5}$ for $B=p, \Lambda, \Sigma^+, \Sigma^0, \Xi^-$ and $\Xi^0$, respectively, where the first uncertainty is from the global fit and the second uncertainty is the systematic uncertainty due to the assumption $|G_E|=|G_M|$. They are at least one order of magnitude larger than a simple scaling of the branching fraction of $J/\psi\to B\bar{B}$. I. INTRODUCTION The ψ(3770) is the lowest lying 1 −− charmonium state above the charmed meson pair threshold. It decays dominantly into D 0D0 /D + D − while the decays to the light hadron (LH) final states are OZI-suppressed. The nature of the ψ(3770) is still unclear. If it is a pure cc bound state, the branching fraction of ψ(3770) into non-DD decays ranges from less than 1% from the potential models [1,2] to about 5% from the non-relativistic QCD calculations [3,4]. If ψ(3770) has a four-quark admixture, the total non-DD branching fraction could be up to 10% [5]. Experimentally, the BES collaboration reported a large non-DD branching fraction of (14.5 ± 1.7 ± 5.8)% [6][7][8] neglecting the interference between the ψ(3770) resonant amplitude and the QED continuum amplitude. Only considering the interference between the one-photon amplitude of the ψ(3770) resonance and the QED continuum amplitude, the CLEO collaboration found this branching fraction to be (−3.3 ± 1.4 +6.6 −4.8)% [9]. To clarify the disagreement, many exclusive non-DD decays with light hadron final states have been searched for using two methods [10][11][12][13][14]. One method is to compare the cross section at the center-of-mass (c.m.) energy ( √ s) close to the ψ(3770) nominal mass and that far from any charmonium resonance (for example, the two energies are 3.773 GeV and 3.671 GeV for the CLEO collaboration). Only for the final state φη is there a significant excess in the cross section at √ s = 3.773 GeV [10]. The other method, which allows the complicated interference effect to be considered, is to perform a scan around the ψ(3770) resonance. Using this method, the BESIII collaboration reports that the line shape of the cross section shows a deficit in the vicinity of the ψ(3770) for the final states pp and ppπ 0 [14,15]. Furthermore, there is a two-solution ambiguity for the branching fraction of ψ(3770) → pp/ppπ 0 , which cannot be resolved from the scan experiment. Recently, evidence of ψ(3770) → K + K − was also found by studying the cross section of e + e − → K + K − above 2.6 GeV [16]. We focus on the decays of ψ(3770) going to baryon anti-baryon pairs (BB). Here B = p, Λ, Σ + , Σ 0 , Ξ − and Ξ 0 . All available experiments measuring the cross section of e + e − → BB at the c.m. 
energy from 3 GeV to 3.9 GeV are combined. In Sec. II, we will present the Born cross section formulas of e + e − → BB and introduce the model to relate all the baryon octet states. In Sec. III, we will review the available experiments and describe the fit strategy. The results will be shown and discussed in Sec. IV. A short summary will be given in Sec. V. The Born cross section of the QED process e + e − → γ * → BB at the center-of-mass energy √ s can be written as in Eq. 1. The resonance production cross section of e + e − → ψ(3770) → BB is written as in Eq. 2, where M_0 = 3773.15 MeV/c2 and Γ_0 = 27.2 MeV [18] are the nominal mass and total width of ψ(3770), and Γ_e (Γ_B) is the partial width of ψ(3770) → e + e − (BB). Γ_B can be written as in Eq. 3, where |F_M^B| and |F_E^B| are the form factors. The form factor ratio |G_E/G_M| is 1 at the baryon pair threshold, but may have small deviations above the threshold. The predicted behavior is model-dependent (see for example Ref. [20,21]). Experimentally, the form factor ratio is measured to be consistent with 1 within the uncertainties for the proton [19,22] in the region 2.2 < √ s < 3.1 GeV and for the baryon Λ [23] in the mass region from the threshold to 2.8 GeV. However, the measurement of the neutron form factor from the threshold up to 2.44 GeV [24] indicates |G_E| = 0. Throughout this paper, we assume that |G_E| = |G_M|. The effect of this assumption will be considered. The nucleon electromagnetic form factors in the timelike region have been extensively reviewed in Ref. [25]. Here, the form factors G_B and F_B take the forms of Eqs. 4 and 5, following a calculation in Ref. [26]. Here Λ = 0.3 GeV is the QCD scale parameter, and C_B and A_B are the free parameters. In Eq. 5, the first term represents the electromagnetic interaction amplitude of the ψ(3770) and the second term represents the OZI-suppressed strong decay amplitude of the ψ(3770). Two phase angles φ and φ′ are introduced relative to the QED process. φ represents the phase difference between the electromagnetic amplitude of the ψ(3770) resonance and the QED continuum amplitude, while φ′ is the corresponding phase of the OZI-suppressed strong decay amplitude. In many analyses (for example Ref. [14,29]), this phase difference is assumed to be 0, namely, φ = 0. We will find that the effect of a nonzero φ is also negligible in the case of ψ(3770) → BB. Therefore, the total cross section considering the interference between the processes e + e − → γ * → BB and e + e − → ψ(3770) → BB is constructed as the coherent sum of the continuum and resonant amplitudes. To relate the form factors for all baryon octets, the SU(3) flavor symmetry is imposed. We also consider the SU(3) breaking effect due to the electromagnetic interaction and the quark mass difference of m_s − m_u/d. For convenience, we introduce the matrix notations. The SU(3) octet baryons and anti-baryons are described by the matrices B and B̄ respectively. In the effective Lagrangian of Eq. 9, g, d, f, d′ and f′ are the coupling constants, "Tr" represents the trace of a matrix, "[a,b]" and "{a,b}" denote the commutator and the anticommutator of the two elements a and b respectively, and the matrices S_e and S_m are defined accordingly. On the right-hand side of Eq. 9, the first line represents the OZI-suppressed strong amplitude, the second line represents the one-photon electromagnetic amplitude, and the third line represents the SU(3)-breaking contribution due to the quark mass difference (more details about the effective Lagrangian can be found in Ref. [27][28][29]). From Eq. 9, we can derive the following relations for the form factors G_B and F_B (or equivalently C_B and A_B). 
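As a hedged illustration of the construction described above, the sketch below adds a continuum amplitude and a ψ(3770) Breit-Wigner amplitude coherently with a relative phase and evaluates |amplitude|², which is what produces the dip or bump structures discussed later. The functional forms, parameter names and normalisations are generic placeholders, not the paper's exact equations.

```python
# Hedged sketch of the amplitude structure described in the text: a QED
# continuum amplitude plus a psi(3770) Breit-Wigner amplitude with a relative
# phase, added coherently.  The specific functional forms below are generic
# placeholders (the paper's exact equations are not reproduced here).
import cmath

M0 = 3.77315     # GeV, nominal psi(3770) mass
GAMMA0 = 0.0272  # GeV, nominal total width

def breit_wigner(s: float) -> complex:
    """Generic relativistic Breit-Wigner propagator for the psi(3770)."""
    return 1.0 / (s - M0**2 + 1j * M0 * GAMMA0)

def total_amplitude(s: float, a_qed: float, a_res: float, phi: float) -> complex:
    """Coherent sum of a continuum amplitude and the resonant amplitude,
    with relative phase phi (radians); a_qed and a_res set the strengths."""
    return a_qed + a_res * cmath.exp(1j * phi) * breit_wigner(s)

def cross_section_shape(s: float, a_qed: float, a_res: float, phi: float) -> float:
    """|amplitude|^2 gives the interference pattern (dip or bump) in the
    line shape around the resonance; the overall normalisation is arbitrary here."""
    return abs(total_amplitude(s, a_qed, a_res, phi)) ** 2

if __name__ == "__main__":
    for e_cm in (3.70, 3.75, 3.773, 3.80, 3.85):
        print(e_cm, cross_section_shape(e_cm**2, 1.0, 0.05, cmath.pi))
```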
Here the free parameters C_1, C_2, A_0, A_1, A_2 are real numbers in practice. A. Experimental review We start with the reaction e + e − → pp, for which the largest number of data sets has been accumulated. The BESIII collaboration has performed a scan from 3.65 GeV to 3.90 GeV and a deficit is found in the vicinity of the ψ(3770) [14]. Considering the interference between the QED process and the ψ(3770) resonant production, two solutions are found for the partial width of ψ(3770) → pp with equal goodness of fit. But Ref. [14] has not reported the statistical significance of the solutions. To resolve this two-solution ambiguity, more experimental information is needed. The results from the studies of the proton form factors from the CLEO [30,31], the BES/BESIII [19,32] and the BABAR [22,33] collaborations can be used. The former two collaborations measure the cross section of e + e − → pp using Eq. 11, where i denotes the energy point, N^obs_i is the observed number of signal events, L_i is the luminosity, ε_i is the efficiency, and (1 + δ_i) is the radiative correction factor [34][35][36]. The BABAR collaboration utilizes the initial state radiation (ISR) technique [37]. The process is e + e − → γpp, where the photon can be required to be detected [22] or undetected [33]. The cross section of e + e − → pp at the c.m. energy of the pp invariant mass M_pp can be extracted according to Eq. 12, where (dN/dM_pp)_corr is the mass spectrum corrected for the mass resolution effect, dL/dM_pp is the ISR differential luminosity [37], ε(M_pp) is the detection efficiency, and R is the radiative correction factor. For the final states ΛΛ, Σ +Σ− , Σ 0Σ0 , Ξ −Ξ+ and Ξ 0Ξ0 , the CLEO and BES/BESIII collaborations [11][12][13] have measured the cross sections at the peak of the ψ(3770) resonance. Neglecting the interference effect with the QED process e + e − → γ * → BB, there is no significant excess compared to the cross section at an energy point far from any charmonium resonance. The BABAR collaboration also studied e + e − → ΛΛ/Σ 0Σ0 /ΛΣ 0 using the ISR technique and provided the upper limit of the cross section at the 90% confidence level (CL), which will be used as a cross-check for our results. All the data sets used in the following fit are summarized in Table I and Table II (for example, the BABAR measurements of e + e − → γpp in the range 3.0–4.0 GeV [22,33] and the CLEO measurements of e + e − → pp at 3.671 and 3.772 GeV [30,31]). The notations for the pp final state in the first column of Table I will be used consistently throughout this paper. B. The fitting strategy To combine the results from various experiments, we should treat the statistical uncertainties and the systematic uncertainties correctly. The number of signal events N^obs is either obtained by simply neglecting the background and counting the number of events, or extracted by subtracting the background events from the total number of events. Either way leads to a systematic uncertainty. At all energy points in an experiment, the luminosities are measured using the same method, the signal events are selected using the same set of conditions, and the radiative correction factors are obtained in the same way. Thus the systematic uncertainties related to them are independent of the energy point and will be considered by introducing a free normalization factor for each experiment. We start with the case of the proton. A χ2 is constructed in Eq. 13 for each experiment except for the "ψ scan" experiment [14]. 
where α denotes the experiment, i_α denotes the i-th energy point for the experiment α, N^obs is the observed number of signal events, and λ is the expected number of signal events, defined as λ ≡ σLε(1 + δ) or σLεR as indicated in Eq. 11 and Eq. 12. (∆N^obs)_tot is the quadratic sum of the statistical uncertainty of N^obs and the systematic uncertainty due to the background subtraction or to neglecting the background events. ∆ε is the statistical uncertainty of the efficiency determined from a limited MC sample. ξ²_ind is the quadratic sum of the systematic uncertainties which are independent of the energy point. It includes the systematic uncertainties due to the consistent selection criteria at all energy points, the trigger efficiency, the reconstruction efficiency of charged tracks, the efficiency corrections as used in the BABAR measurements [22,33], the measurement of the luminosities, and the radiative correction factors. To take these point-independent systematic uncertainties into account, the free normalization factor f_α is introduced for each experiment. Here two things should be noted. One is that we do not consider the correlation of the various selection conditions. The other is that we assume the form factors satisfy |G_E| = |G_M| and thus we do not consider the efficiency uncertainty due to this assumption (typically, the efficiency with |G_E| = |G_M| is 5%−10% different from that with |G_E| = 0 [33]). For the "ψ scan" experiment, in which N^obs is found to be 0 at some energy points and the background contamination is only 0.6%, it is better to construct the likelihood function assuming that the number of signal events at each energy point follows a Poisson distribution, as shown in Eq. 14, where P(N|ν) is the probability of observing N events with the expectation value ν in the Poisson distribution, namely, P(N|ν) ≡ ν^N e^{−ν}/N!, and f is the free normalization factor. For the other baryon octets, the cross section of e + e − → ΛΛ at the peak of the ψ(3770) reported by the CLEO collaboration [11] and that of e + e − → Σ +Σ− /Σ 0Σ0 /Ξ −Ξ+ /Ξ 0Ξ0 reported by the BESIII collaboration [13] are used. As shown in the second column of Table II, these processes share some final particles such as protons, pions and photons. The related systematic uncertainties due to the reconstruction of proton and pion tracks, the second vertex fit, the particle identification, and the detection of the photons are shared. However, Ref. [13] did not report the individual systematic uncertainties, so it is impossible to treat them correctly. Fortunately, the limited knowledge of the angular distribution contributes the dominant systematic uncertainty of 9.2%−10.9%, which depends on the baryon pair and should be considered individually. The χ2 is then constructed as follows, where B denotes the baryon, λ is the expected number of signal events, defined as λ = σLε(1 + δ) × B_f with B_f being the product of the branching fractions of the intermediate-state decays, and (∆λ)_tot is the total uncertainty of the expected number of signal events. IV. FIT RESULTS AND DISCUSSIONS A. Fit to the cross section of e + e − → pp First, we try the fit in the case of pp. The free parameters are f_α, A_p, C_p and φ′. The phase φ is fixed to 0 for two reasons. One is that we can directly compare our result with that of Ref. [14]. The other is that floating φ leads to a negligible difference. 
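The combination machinery described in Sec. III B above can be sketched as follows: the expected yield λ = σLε(1 + δ), a per-experiment normalisation factor f, and a Poisson likelihood for the low-statistics scan data. The sketch constrains f with a simple Gaussian penalty of width ξ_ind, which is an assumption made for illustration; the exact weighting of the paper's Eqs. 13 and 14 is not reproduced.

```python
# Hedged sketch of the chi^2 / likelihood construction described in the text:
# lambda = sigma * L * eps * (1 + delta) expected events per point, a free
# normalisation factor per experiment (constrained here by a simple Gaussian
# penalty of width xi_ind, an illustrative assumption), and a Poisson term for
# the low-statistics scan data.
import math

def expected_events(sigma_pb, lumi_pb, eff, rad_corr):
    """lambda = sigma * L * eps * (1 + delta)."""
    return sigma_pb * lumi_pb * eff * (1.0 + rad_corr)

def chi2_experiment(points, f_norm, xi_ind):
    """points: list of (n_obs, dn_obs, lambda_expected) tuples.
    Adds a penalty term constraining the normalisation factor f_norm."""
    chi2 = sum((n - f_norm * lam) ** 2 / dn ** 2 for n, dn, lam in points)
    chi2 += (f_norm - 1.0) ** 2 / xi_ind ** 2
    return chi2

def nll_poisson_scan(points, f_norm):
    """Negative log-likelihood for scan data with Poisson-distributed counts;
    points: list of (n_obs, lambda_expected) tuples."""
    nll = 0.0
    for n, lam in points:
        nu = f_norm * lam
        nll -= n * math.log(nu) - nu - math.lgamma(n + 1)
    return nll

if __name__ == "__main__":
    pts = [(120, 12.0, 115.0), (95, 10.0, 101.0)]        # hypothetical numbers
    print(chi2_experiment(pts, f_norm=1.02, xi_ind=0.05))
    print(nll_poisson_scan([(0, 0.8), (3, 2.5)], f_norm=1.0))
```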
Two solutions are found with the same goodness of fit χ2/ndof = 25.9/29, where ndof is the number of degrees of freedom. The branching fraction of ψ(3770) → pp is found to be either (6.8 +7.1 −2.2) × 10−6 or (2.5 ± 0.1) × 10−4. If the process ψ(3770) → pp is not included, the fit gives χ2/ndof = 91.5/31, which means that the statistical significance of both solutions is larger than 5 standard deviations. Our results, summarized in Table III and Table IV, are consistent with those in Ref. [14]. But Ref. [14] does not report the statistical significance of the solutions. Figure 1 shows the cross sections of e + e − → pp from various experiments and the fit. There is an obvious deficit in the vicinity of the ψ(3770). Including all experiments, the fit results are summarized in Table V and shown in Fig. 2. The goodness of fit is χ2/ndof = 43.7/31. From Fig. 2, we find that the line shape shows a dip structure around the ψ(3770) resonance for the final states pp and Σ +Σ− and a bump structure for the final states ΛΛ, Σ 0Σ0 , Ξ −Ξ+ and Ξ 0Ξ0 . From Table V, we find that |A_1|, |A_2| << |A_0|, which means that the SU(3) breaking effect is small. In addition, the upper limits of σ(e + e − → ΛΛ/Σ 0Σ0 /ΛΣ 0 ) at 3.2−3.6 GeV from the BABAR measurement [23] are consistent with the predicted cross section from the fit result. Using the parameters from the fit, the branching fractions of ψ(3770) → BB are calculated according to Eq. 3 and listed in Table VI, where the first uncertainty is from the global fit and the second uncertainty is due to the assumption that the electric and magnetic form factors are equal. 1. In the analysis above, the relations |G_E| = |G_M| and |F_E| = |F_M| are assumed. Fits are repeated assuming |G_E| = 0 and |F_E| = 0 instead, which is indicated by the measurement of the neutron form factors [24]. The branching fraction difference is taken as the systematic uncertainty (the second uncertainty term in Table VI). 2. We find that the two-solution ambiguity of B(ψ(3770) → pp) reported in Ref. [14] is resolved by including the measurements of the other baryon pairs. This can be clearly shown by comparing the χ2 curves as a function of the parameter A_p using only the cross sections of e + e − → pp and using the cross sections of e + e − → BB (B = p, Λ, Σ + , Σ 0 , Ξ − and Ξ 0 ). The reduced χ2 curves are illustrated in Fig. 3. The reduced χ2 is defined as the difference between the χ2 from the fit with A_p fixed and that from the best fit. In Fig. 3, the reduced χ2 is shown as a function of A_p using only the cross sections of e + e − → pp (red curve) and using the cross sections of e + e − → BB (blue curve); the arrows denote the best fits, and the solid and dashed horizontal lines denote the 1σ and 2σ regions, respectively. The blue curve in Fig. 3 indicates that the smaller solution in Ref. [14] gives a better fit when all measurements of σ(e + e − → BB) are included. In the case of assuming |G_E| = 0 and |F_E| = 0, this conclusion does not change. 3. The relative phase between the electromagnetic amplitude of the ψ(3770) and the QED amplitude (φ = −54 • ± 230 • ) is consistent with 0 within the uncertainty. Fixing φ = 0 produces a negligible effect. The relative phase of the OZI-suppressed strong decay amplitude of the ψ(3770) and the QED amplitude is found to be −137.5 • ± 2.7 • . 
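The reduced χ2 curves described above can be produced with a one-dimensional profile scan: fix A_p at a grid of values, re-minimise the remaining parameters, and record the difference with respect to the best-fit χ2. The sketch below shows the generic procedure with a toy objective (assuming numpy and scipy are available); it is not the paper's actual fit code.

```python
# Sketch of the "reduced chi^2" scan described in the text: fix one parameter
# at a grid of values, re-minimise the remaining parameters, and return
# chi^2(fixed) - chi^2(best fit).  The toy objective below is a stand-in; the
# real fit uses the full cross-section model of the paper.
import numpy as np
from scipy.optimize import minimize

def profile_scan(objective, p_best, scan_index, scan_values):
    """Return (value, delta_chi2) pairs for a 1D profile of `objective`."""
    chi2_best = objective(p_best)
    results = []
    for v in scan_values:
        def constrained(p_free):
            p = list(p_free)
            p.insert(scan_index, v)      # re-insert the fixed parameter
            return objective(np.array(p))
        p0 = np.delete(p_best, scan_index)
        fit = minimize(constrained, p0, method="Nelder-Mead")
        results.append((v, fit.fun - chi2_best))
    return results

if __name__ == "__main__":
    # Toy objective with two minima, mimicking the two-solution ambiguity.
    toy = lambda p: (p[0] ** 2 - 1.0) ** 2 + (p[1] - 0.5) ** 2
    print(profile_scan(toy, np.array([1.0, 0.5]), 0, np.linspace(-1.5, 1.5, 7)))
```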
Many phenomenological analyses [29,[38][39][40][41][42][43][44][45] have been performed for various final states in the hadronic decays of J/ψ and ψ(3686). They reveal that the relative phase φ is close to −90 • . For J/ψ/ψ(3686) → BB, two possible phase values are found using a similar model in Ref. [29]. If the relative phase is assumed to be universal whatever the final state is, the large negative values are favored and are close to −90 • for J/ψ and ψ(3686). For the decay mode ψ(3770) → pp, the calculation of Ref. [46] shows that the dominant contribution is the OZI-suppressed amplitude and φ = −113 • . If the contribution of the OZI-allowed DD state as an intermediate state (which was first introduced in Ref. [47]) is included, the phase angle φ becomes −99 • . However, our study shows the phase is far from −90 • , which indicates that there may be an additional mechanism contributing to the baryon-pair decays of ψ(3770). Here β^ψ_B = (1 − 4M_B²/M_ψ²)^{1/2} and Γ_e(ψ) (B_e(ψ)) is the partial width (branching fraction) of ψ → e + e − . Under the assumption above, the expected partial widths can be obtained. Table IX summarizes the measured partial widths of these non-DD decay modes and the theoretical predictions. The measured partial width is calculated by multiplying the full decay width by the corresponding measured branching fraction [18]. From Table IX, we find that the potential models proposed in Ref. [48] and Ref. [49][50][51] can explain well the rate of the decay modes with a charmonium final state. These models assume that ψ(3770) and ψ(3686) are mixtures of the 2S and 1D states of the cc system, as shown in Eq. 18 with the mixing angle θ ≈ −10 • . However, for exclusive light hadron decay modes, it is difficult to obtain an accurate theoretical prediction. In this work, the combined branching fraction of ψ(3770) → BB is of the order of 10 −4 , obtained by summing up the numbers in Table VI. Though it is much smaller than the non-DD branching fraction of the order of about 10% measured by the BES collaboration [6][7][8], the baryon pairs only account for a small fraction of the light hadron decay modes. Furthermore, this work shows that B(ψ(3770) → BB) is at least one order of magnitude larger than that scaled from B(J/ψ/ψ(3686) → BB) as discussed above. This indicates that the mechanism in the light-hadron decays of the ψ(3770) is different from that in the case of the J/ψ or ψ(3686). VI. ACKNOWLEDGEMENTS Li-Gang Xia would like to thank Fang Dai for many helpful discussions.
5,117
2015-08-09T00:00:00.000
[ "Physics" ]
Physical Modelling of Recrystallization and Grain Growth in Steels : Analysis of Free Parameters The mechanical properties of steels are strongly affected by grain size and chemical composition variations. Many industrial developments have been carried out both from the point of view of composition variation and grain size in order to exploit the effect of these variables to improve the mechanical proprieties of steels. It is also evident that recrystallization and grain growth are relevant to the mechanical properties of steels, thus suggesting the necessity of mathematical models able to predict the microstructural evolution after thermo-mechanical cycles. It is therefore of primary importance to study microstructural changes, such as grain size variations of steels during isothermal treatments through the application of a mathematical model, able in general to describe the primary recrystallization and grain growth in metals. This paper deals with the recrystallization and grain growth modelling of steels based on the statistical theory of grain growth originally developed by Lücke1 and here integrated to take into account the effect of recrystallization and Zener drag effect. A general continuity equation is proposed describing in continuous way recrystallization and grain growth phenomena without taking into account textures effect. The effect of input parameters is analyzed. Key-words: Grain growth, Recrystallization, Stainless steel, Computer simulation. Grain growth after primary recrystallization is described as taking place in two different forms, either in form of continuous grain growth ("normal") or in form of discontinuous grain growth ("secondary recrystallization").In the development of this area two stages can be distinguished.In the first stage simple and mostly qualitative interpretations of grain growth have been given.For continuous grain growth, Beck 2 predicted an increase of the average grain diameter with time as t 1/2 which was practically never found.The second stage of the development of this area is characterized by more sophisticated approaches.Hillert 3 transferred the statistical treatment of Ostwald ripening of precipitates to grain growth according to the Lifshitz-Slyozov-Wagner theory [4][5] .Moreover, Hunderi and Ryum 6 introduced a deterministic model considering individual boundaries and describing the change of size of the individual grains by an extremely large set of differential equation (one for each grains) which they solved numerically.Finally,Abbruzzese 1 developed further the Hillert model by calculating a critical radius that was only postulated by Hillert 3 .The main novelty of Abbruzzese study was to use discrete grain size classes which reduce significantly the numbers of differential equations (one for each class) and thus to the possibility to calculate numerically the evolution of the grain size distribution.In recent years the Monte-Carlo simulation was widely used to simulate grain growth including also the case of Zener drag 7 .Hesselbarth and Gobel used, with success, a method named cellular automata in simulation of the theory of Johnson-Mehl-Avrami-Kolmogorov and other successful mesoscale simulations for micro structural evolution including front tracking model 8 , vertex model 9 and phase field model 10 have been developed.While analytical models, such as Abbruzzese and Lücke, predict all the characteristics of micro structural evolution (i.e., grain size and grain size distribution), the goal of mesoscale computational 
simulations is rather different: to generate snapshots of the evolving microstructure with time.Using the computational version of metallography, both local and ensemble properties of the microstructure may be determined from these snapshots. description of the model A mathematical model able to simulate simultaneously recrystallization and grain growth phenomena is described in this paper.The driving force for primary recrystallization in metals is mainly related to the reduction of the deformation energy (dislocations) introduced by cold working.Heat treatment activates the movement of dislocations and sub-grain boundaries allowing the release of the deformation energy and thus restoring a "dislocation free" microstructure.Under further heat treatment, grain growth activated by boundary energy reduction is the dominant process 7 .In this approach, recrystallization nuclei are considered pre-existing and homogeneously distributed in the deformed microstructure. As far as concerns grain growth, the statistical model, originally developed by Abbruzzese and Lucke 1 , is based on the assumptions of: • Super-position of average grain curvatures in individual grain boundaries.A grain v is characterized by a volume V, is assumed to growth at the expense of a neighbouring grain µ with a rate: ...(1) Here the "radius" R v of a grain v is defined according .S vµ is the area of contact between the two grains v and µ, and m, g and M=2m g represent respectively, the mobility, the tension and the diffusivity of the grain boundary vµ.By taking into account all Nv neighbours of this grain one obtain for all its total growth rate: ... (2) With v=1,2….N G and N G being the total number of grains, this expression represent a system on N G differential equation for the unknown .However, because of the large numbers of grains N G and thus a large number of equation is necessary to obtain a significant simulation, this leads to a great computational difficulties.Therefore, with the second and third simplifying assumptions will be easy to overcome these difficulties: • Homogeneous surroundings of the grains. As a first approximation is assumed that for each grain v the individual neighbourhood of N v individual grain can be replaced by a surrounding obtained by averaging over a neighbourhood of all grains of the same radius R v .Since then all grains of the same radius would have the same surrounding, also their growth rate would be equal.This means that then all grains could be collected in classes characterized by their radius and that the behaviour of only different classes has to be considered, instead of single grains. In the following these classes will be denoted by the indices i, j…N s being the total number of classes.From the mathematical point of view the simplifcation consists in replacing in Equation ( 3) the individual contact area by averaged area , is the total number of grains in class j and A ij is the total area of contact between the two classes i and j.Then, it follow that: ...(3) • A random array of the grains namely the probability of contact among the grains is only depending on their relative surface in the system.In this case the area of a grain of the class i is divided between the neighbouring grains of the class j inproportion to the individual surface area: ...( 4) The integration of all the above assumptions in the model leads to the following ûnal form of the grain growth rate equation: ... 
(5), where M is again the boundary diffusivity and in our case study the mobility m was evaluated according to the Stokes-Einstein relationship 11 (Eq. 6), where D is the diffusion coefficient, K_B is the Boltzmann constant, ΔE is the activation energy of the process and T is the annealing temperature. D was chosen proportional to the diffusion coefficient of Fe in Fe-γ. According to Zener 7 , particles inhibit grain boundary motion; this can be considered as a retarding force acting homogeneously along the moving boundary, with a behaviour similar to a frictional force. In order to apply this description to grain growth phenomena, the magnitude of this force P_Z has to be taken into account. In fact, for a small external driving force P no motion of the boundary takes place. Only if the external force surpasses the maximum force I_Z0 that the particles can exert does the boundary move, with a net driving force ΔP = P − I_Z0. This leads to three ranges for the net force acting on the boundary 12 (Eq. 7): ΔP = P − I_Z0 for P > I_Z0; ΔP = 0 for −I_Z0 < P < I_Z0; ΔP = P + I_Z0 for P < −I_Z0. For the maximum Zener force I_Z0 the usual expression (Eq. 8) is used, where f_p is the volume fraction and r_p is the mean radius of the particles; b is a proportionality constant in the range 0.75–1. With Equations (7) and (8), one obtains the net force ΔP_ij in the three ranges, from which the growth rate for each class i can be derived. To describe the recrystallization process integrated with the grain growth, it is necessary to propose an extended growth equation that allows the evolution of free nuclei in the matrix to be analysed contemporarily and continuously, passing through partially impinged grains up to full contact. An "influence mean radius" was introduced that allows the fraction of surface in contact between different grains to be evaluated 13 . The final equation for recrystallization and grain growth can therefore be written as Eq. (13), where G is the shear modulus of the material, b is the Burgers vector, ρ is the dislocation density, and Δρ = ρ_d − ρ_r is the difference between the dislocation densities in the deformed and in the recrystallized material. The dislocation density ρ was considered proportional to the cold reduction rate, and the initial number of nuclei N is a free parameter of the model that changes in relation to the reduction rate. The criterion for identifying the critical class is obtained by defining an average influence volume and, consequently, an influence radius R_m, calculated as in Eq. (14). N_T is the number of grains per cm3, F_V is the recrystallized volume fraction, v_i is the volume of a grain of class i and n_i is the number of grains of volume v_i. R_m varies from (3/(4πN_T))^(1/3) to zero when all the grains are in contact. The R_m parameter then defines an index i* that discriminates the class above which all the grains are in contact 13 . 
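A minimal sketch of the Zener-drag logic described above is given below: a maximum pinning force I_Z0 is computed from the particle volume fraction and radius, and the net driving force follows the three ranges of Eq. (7). The prefactor used for I_Z0 is the classic 3 f_p γ /(2 r_p) form scaled by the constant b, used here only as a placeholder for the paper's Eq. (8); the numbers in the example are illustrative.

```python
# Sketch of the Zener-drag treatment described in the text: a maximum pinning
# force I_Z0 opposes boundary motion, and the net driving force follows the
# three ranges of Eq. (7).  The classic 3*f_p*gamma/(2*r_p) expression is used
# here as a placeholder for the paper's Eq. (8), scaled by the constant b.

def zener_max_force(f_p: float, r_p: float, gamma: float, b: float = 1.0) -> float:
    """Maximum retarding force exerted by the particles (placeholder form)."""
    return b * 3.0 * f_p * gamma / (2.0 * r_p)

def net_driving_force(p: float, i_z0: float) -> float:
    """Eq. (7): the boundary only moves when |P| exceeds I_Z0."""
    if p > i_z0:
        return p - i_z0
    if p < -i_z0:
        return p + i_z0
    return 0.0

if __name__ == "__main__":
    i_z0 = zener_max_force(f_p=0.01, r_p=1e-7, gamma=0.5, b=0.9)  # illustrative units
    for p in (-2 * i_z0, -0.5 * i_z0, 0.5 * i_z0, 2 * i_z0):
        print(p, net_driving_force(p, i_z0))
```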
Annealing temperature effect One of the free input parameters of the model is the annealing temperature, which deeply influences the mobility parameter m and consequently affects the grain size and the recrystallized volume fraction. Four temperatures, ranging from 700°C to 1100°C, have been investigated, while the reduction rate ε, the dislocation density Δρ and the number of nuclei N were maintained constant. The effect of the annealing temperature on the grain size is reported in Figure 1. Results show that the mean radius decreases with decreasing temperature due to a decrease of the mobility parameter from 3.16e-11 erg/cm2 at 1100°C to 3.23e-14 erg/cm2 at 700°C, according to Equation 6. Results also show that a minimum temperature of 700°C is required in order to activate grain growth. Also, the recrystallized volume fraction (Figure 2) reaches unity at 1100°C and 1000°C after 13 seconds and 49 seconds respectively, while for the other two temperatures this value is never reached. Reduction rate effect For industrial purposes and for the statistical model, the reduction rate is one of the most important parameters, and Table 1 shows the different values used for the computer simulation. The effect of reduction has been explored by varying the reduction rate ε (at a constant temperature of 1110°C) jointly with the dislocation density, according to Table 1. The simulation results confirm that if the cold reduction is varied from 40% to 80%, the mean radius increases by about 4%. This could be explained by considering that higher values of deformation introduce more deformation micro-cells (nuclei N) into the structure, which can increase their size less due to the excessive proximity of neighbouring nuclei. Zener drag effect Three different values of the Zener parameter have been tested, maintaining the temperature (T=1100°C), the reduction rate (ε=80%) and the dislocation density (Δρ=1x10^11 cm−2) fixed. The comparison between the mean radius curve (over time) obtained by the grain growth simulation without the Zener effect and the other curves is shown in Figure 4. At the highest Zener parameter I_Z=1000, a reduction of the maximum mean radius is observed. CONCLUSION Results from a recrystallization and grain growth model based on statistical assumptions have been discussed here. In particular, the effect of the annealing temperature, dislocation density and
2,737
2017-06-25T00:00:00.000
[ "Materials Science", "Physics", "Engineering" ]
Reusable Components in Knowledge-based Configuration Design Systems Abstract This paper takes a look at how components for knowledge-based intelligent systems can be created for reuse. For this purpose, we use production rules as inspiration for a system that uses an ontology description for the method and the domain ontology for the knowledge about the domain the problem takes place in. In this paper we give a description of an approach that hopefully can give insight into such a system. The approach is based on previous work and other scientific publications concerning this field of study. The created ontology models are in no way guaranteed to be useful outside of this example and the approach itself might still need to be improved in the future. I. INTRODUCTION One of the most expensive tasks, if not the most expensive, of implementing an information technology solution to a problem is the development of the software solution [5]. In order to cut the expenses associated with software development, the reuse of existing solutions would be preferred. One possibility for reuse of existing solutions would be the reuse of problem-solving methods in knowledge-based systems [6]. This is often difficult since any problem-solving methods and problem solutions would be closely connected with the problem domain. An approach would have to be introduced that would disconnect domain knowledge and problem-solving methods. This paper strives to provide a possible approach to this problem. By using production rules from an existing knowledge system, we try to create an ontology that describes these rules not only in a way that provides descriptions of all concepts within the rules, but that is also reusable. By reusable we mean a description that is as independent as possible from the domain knowledge. II. ANALYSIS OF THE PRODUCTION RULE EXAMPLE AND PREPARATION FOR CONVERSION In order to find an approach we will construct several ontology models from an example of production rules. The example used in this paper is a modification of the rules from the bagger problem. It was originally introduced by Patrick Winston of MIT [1]. All the rules are given in Table I; the rules for the bag-small-items step read:
B11: IF the step is bag-small-items, there is a small item to be bagged, there is a bag that is not yet full, and the bag does not contain bottles, THEN put the small item in the bag.
B12: IF the step is bag-small-items, there is a small item to be bagged, and there is a bag that is not yet full, THEN put the small item in the bag.
B13: IF the step is bag-small-items and there is a small item to be bagged, THEN start a fresh bag.
B14: IF the step is bag-small-items, THEN stop.
From this description we can extract an ontology that describes the item domain. This ontology holds all the concepts and individuals that describe the items from the shopping list. The rules are similar to rules used in such systems as XCON [3]. But let us first take a look at the rules and their meaning. Rule B1. Since rules, and every test within a rule, are performed in order, the very first test is for the current active step. In case this test fails, the reasoner can immediately jump to the next rule and save time this way. 
Next, the rule wants to know if there is a "bag of potato chips" in the user's order.There are several ways of looking at some information given in these rules.There are at least 2 ways of implementing such a request.The first way is a direct check for the item "chips" not for its class or other property of the individual.The second way is to define "chips" as a subor super-concept of an item.For example, we could implement a concept "bag of potato chips" as a sub-concept of "mediumsized item".This way the rule would check for the existence of any potato chip product from several possible ones while at the same time working with a medium-sized item when needed.However, this would either indicate that all bags of potato chips are only ever medium sized or every item concept would need to be connected to a sub-concept of "bag of chips".The structure of the domain ontology and the connected requirements of the method ontology need to be defined in a matter that allows such tests.For the purposes of this paper we will define the test for a "bag of potato chips" as a direct search for a specific individual in order to explore the required specifications in the method ontology for such a search. The final part of the IF statement of the first rule is the check for a soft drink bottle in the order.This problem is similar to the "bag of chips" one; however, we can see the mentioning of a "Bottle of soft drink" and "Pepsi"; therefore, we can implement this as Pepsi being an individual of the concept "bottle of soft drink" and it, in turn, is a sub-concept of a large item.For this item we will implement a more difficult structure of concepts. The THEN part of the rule adds a specific individual to the order. The next rule B2 exchanges the current step with the next one.The original version of rules like this made an extra step of stopping the execution of the current step before assigning the next. Rule B3 tests to see if the order has large bottles that need to be bagged.From the rules alone it is unclear if "Large Bottle", "Bottle of soft drink" and "Bottle" are the same concepts; a concept hierarchy is any other structure of information.For example, we could have the concept "Bottle" as a sub-concept of "Item".The concept "Large Bottle" is a sub-concept of both "Bottle" and "Large Item".And, in its turn, the concept "Bottle of soft drink" is a sub-concept of "Bottle" or "Large Bottle".Again it is unclear if in this domain bottles of soft drink are always large.If they are not large, there should be additional concepts, such as "Large Bottle of soft drink", "Medium Bottle of soft drink" and "Small Bottle of soft drink".They all would be sub-concepts of "Bottle of soft drink" and the one concepts of either "Small Item", "Medium Item" or "Large Item". This rule also tests if there are 6 large items in the bag in order to determine if there is still room to place a large item. Should the rule fire, the large bottle in question would be placed inside the bag. B4 is a shortened version of B3.Since rules are always fired in order, by the time the reasoner reaches, there will not be any large bottles left in the order and only remaining large items would be needed to be bagged. Rule B5 is interesting since by the time this rule is reached there will be no free space in the bag left.The test for 6 items was performed in the previous rule.This way the rule system can determine when a bag change needs to be performed. 
When B6 becomes the only available rule, it is clear that the next big step in the bagging process needs to be taken. Rule B7 is the first rule of the next step, which bags medium items. However, in its original wording this first rule was not clearly meant to put items into a shopping bag. Instead it searched for items with the property "Frozen" (or another indicator for this) and put them only into a specialized freezer bag. Unfortunately, several things were unclear with this rule, and it could be interpreted very differently. If this step is meant to find all frozen items and put them into freezer bags, then the test for room inside the shopping bag is meaningless, since the item is placed inside the freezer bag and not the shopping bag, and until a rule is hit that will put it into a bag, the active bag can change. If the freezer bag were immediately placed inside the shopping bag, the test for room inside the shopping bag would make sense; however, the test for the item being inside a freezer bag (as it was in its original form) would not. Any item in a freezer bag would already be inside the shopping bag as well and would not need to be bagged. For this paper the rule was changed by removing the IF element "the medium item is not in a freezer bag" and modifying the THEN element to indicate that the item is put inside a freezer bag and inside the shopping bag at the same time. This modification of the THEN part also ensures that there is only one freezer bag in every shopping bag. This, however, would mean that the implementation of this element would need to tell the system to check if there already is a freezer bag inside the shopping bag and put the frozen item inside that bag. Another thing that needs to be noted is the test for an empty bag or a bag that contains medium items. It is interesting in two ways: first, it contains two tests separated with "or"; and, second, it begins with the words "there is a(n)". This wording indicates that this element is not only a test, but that it also changes the environment in which the execution takes place. It seems that if there is a bag that fulfils the criteria of this test, that bag is made the current shopping bag that is being filled. If the current bag which was being filled at the time this rule fired did not fulfil the requirement of being empty or containing a medium item, and at the same time there was another bag that fulfilled it, from that moment onwards that bag would be considered the current active bag. The other rules concerning medium items, similar to the rules about the large items, continue to become more and more general until the next step concerning small items needs to become activated. B11 is the first rule concerning small items. It searches for a bag that is not yet full. It needs to be noted that "not full" is different from "containing 6 large items" or "empty". That makes at least 3 possible tests that can be performed on a bag concerning its fullness. The rule also searches for a bag that does not contain a bottle. However, in combination with rule B12, which puts a small item in any bag that is not full, this means that small items are preferably put in bags with no bottle, but can end up with one if no empty bag is available. Still, in combination with rule B13 a situation can arise in which a small item is put in a bag with a bottle, but after that a new bag is started, leaving that small item in an undesired situation with no rule to put it into another bag. Rule B14 ends the execution of the rules, switching to the step "stop".
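The ordered, first-match firing of the rules discussed above can be sketched as a small interpreter: rules are scanned from the top, the first rule whose IF tests all hold fires its THEN action, and the cycle repeats until the step becomes "stop". The sketch below encodes only simplified versions of rules B12-B14 with an illustrative state layout; it is not the ontology-driven implementation proposed in this paper.

```python
# Minimal sketch of the ordered production-rule execution the bagger example
# relies on: rules are scanned in order, the first rule whose IF tests all
# hold fires its THEN action, and execution repeats until the step is "stop".
# The state layout and rule bodies below are illustrative, not the full Table I.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class State:
    step: str = "bag-small-items"
    order: List[str] = field(default_factory=lambda: ["candy bar", "gum"])
    bags: List[List[str]] = field(default_factory=lambda: [[]])

@dataclass
class Rule:
    name: str
    if_tests: List[Callable[[State], bool]]
    then_action: Callable[[State], None]

RULES = [
    Rule("B12",
         [lambda s: s.step == "bag-small-items",
          lambda s: len(s.order) > 0,
          lambda s: len(s.bags[-1]) < 6],
         lambda s: s.bags[-1].append(s.order.pop(0))),
    Rule("B13",
         [lambda s: s.step == "bag-small-items",
          lambda s: len(s.order) > 0],
         lambda s: s.bags.append([])),
    Rule("B14",
         [lambda s: s.step == "bag-small-items"],
         lambda s: setattr(s, "step", "stop")),
]

def run(state: State) -> State:
    while state.step != "stop":
        for rule in RULES:                      # always re-scan from the top
            if all(test(state) for test in rule.if_tests):
                rule.then_action(state)
                break
    return state

print(run(State()).bags)
```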
III. THE DOMAIN ONTOLOGY From the rule description it made sense to arrange the items around concepts describing the item sizes. It was done this way since many rules address the items in question as "Large Item", "Medium Item" and "Small Item". Only rarely was an item addressed directly or as something else. In the case of the Pepsi item, in order to take full advantage of all its properties it was made into an instance of "Bottle", "Large Item" and "Soft drink", making it a "Large Bottle of soft drink". Figure 1 shows a graphic representation of the domain ontology. Other items, "Pepsi 0.5L" and "Ice Pop", were added to show that the domain ontology can hold other information that is not necessarily used to solve a given task. Let us assume that the store's policy is not to put small frozen products into freezer bags since the shopper might like to enjoy them right away. Therefore the "Ice Pop" will not be put into a freezer bag by the current rules. The property "Frozen" in this ontology is given by using a property with the individual "Frozen" as a target. Not every domain ontology model might use this approach. For example, one ontology model might have a literal value "Frozen". Every frozen item would have a property with that literal value as a target. Such problems might be solved by the mapping process if they are considered correctly. IV. IMPLEMENTATION OF THE RULE EXECUTION Depending on the system that is constructed, we can have several different implementations of the bagger method [8], [9]. They can range from very basic and simple ones to very specific and complex implementations. For example, it would be valid to provide a very simplistic method ontology that simply provides the rules in a concept hierarchy with the rules as individuals. Such an ontology would only describe the rules, and execution would be manual or in a system that would access the individuals from the ontology and parse their names. However, in this paper we will try to construct a more complex method ontology that provides all necessary elements and data for direct execution. This study is a continuation of the theme of the previous paper about ontology construction from guidelines [10]. The previous study also provided insight into ontology models capable of execution, based on GEM [11] guidelines. V. METHOD ONTOLOGY This is a general description of one possible definition of the method ontology. The method ontology must contain all elements and descriptions for the bagger problem to be executed. The method ontology must have a description of how it works in general. The concept "Method description" can do this. However, depending on how fine a description needs to be provided, it may be better to define several sub-concepts and a structure that is better suited for providing information about a method. The main idea is to have a specific element within the ontology that describes what information needs to be given in the beginning and what information is returned in the end. The bagger method requires a list of items picked by a user to be provided in the beginning: the order list. In turn, the method returns a list of bags and their content. 
Next, the method ontology defines an internal and an external part of the execution. In the external part, we can see parts of the structure of the domain ontology. This is needed since several IF elements require tests for specific concepts (large item, soft drink bottle). Moreover, several individuals are in the method ontology since they are referenced directly by the rules. Besides, having the concept structure of the domain ontology in the method ontology helps in finding mapping solutions for new domain ontology models that do not have mapping information, since this is the part that will serve as an interface to the domain ontology after the mapping process.

In the internal part, the ontology describes all elements needed for rule execution. First, there is a list of all step values that are used by the rules. In this case they are: check-order, bag-large-items, bag-medium-items and bag-small-items.

Next, we have the rule concept and its associated IF and THEN parts. A dissection of a rule individual can be seen in Table II. Also, from the tasks in the IF and THEN parts we can extract some useful sub-functions that can be called recurrently, rather than having to give the same description of actions for several rules.

The temporal part is a special part of this ontology. During execution, the individuals contained in it will change several times.

In this example, a function is a hard-coded set of static activities and does not need inputs. However, every function that is connected via an "IF_x" property has a Boolean output, and every such function has to be true for the rule to be true. Some IF functions and every THEN function affect the state of a working ontology. This working ontology holds the information necessary to describe the changes during the execution of the rules.

It is necessary to note that the IF and THEN properties of rule individuals have to be numbered, since the order in which they are activated can affect how a rule behaves.

One thing that is not given in the picture above is the element that describes the activity of every function. This can be given in the ontology as a property value, or the individual itself can link to the resource that describes the action.

A sub-function is a simpler and frequently used function. It is used for actions that can be generalized and therefore reduces the number of definitions that need to be given for the execution of actions. An example of a sub-function's properties is given in Table III. Functions that are described in the IF part of the ontology, the functions of the THEN concept and sub-functions use other elements given in the method ontology. For example, the IF function "step is bag-large-items" has to have a reference to the individual "bag-large-items" of the "Step" concept. This makes it clear how this function operates, so that it does not have to rely solely on its own definition. However, since tests for the current step are common in this method, it also uses a sub-function: the "is step" sub-function. This sub-function receives the individual "bag-large-items" as an input from the "step is bag-large-items" function. In order to test whether or not this individual is in fact the current step, it needs to be connected to the "current step" individual of the concept "current variable". Having access to this variable, the sub-function can examine its current connection to a "step" individual. If the two are equal, both the sub-function and the main function hold true.
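As an illustration of how a rule individual with numbered IF and THEN properties and the "is step" sub-function could be evaluated, the following is a minimal Python sketch. The data structures and names are assumptions made for this example and are not part of the ontology described in the paper:

```python
# Hypothetical, simplified representation of a rule individual and its evaluation.
# A rule has numbered IF functions (all must hold) and numbered THEN functions.

def is_step(expected_step, working):
    """Sub-function "is step": compare an expected step individual with the
    "current step" individual held in the working ontology."""
    return working["current_step"] == expected_step

def evaluate_rule(rule, working):
    """Fire the rule if every numbered IF function holds; apply THEN functions in order."""
    for _, condition in sorted(rule["IF"].items()):      # IF_1, IF_2, ... in order
        if not condition(working):
            return False
    for _, action in sorted(rule["THEN"].items()):       # THEN_1, THEN_2, ... in order
        action(working)
    return True

# Example rule individual: "if the step is bag-large-items, switch to bag-medium-items".
step_switch_rule = {
    "IF":   {1: lambda w: is_step("bag-large-items", w)},
    "THEN": {1: lambda w: w.update(current_step="bag-medium-items")},
}

working_ontology = {"current_step": "bag-large-items"}
evaluate_rule(step_switch_rule, working_ontology)
print(working_ontology["current_step"])   # -> bag-medium-items
```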
The temporal part of the method ontology contains individuals that will change properties during execution. Also, new individuals will be created. Let us take a closer look at it.

VI. TEMPORAL ELEMENTS

Here we can see the ontology that describes the current state of the bagging algorithm. It will either be part of the method ontology and actively used in it, or a separate instance of this ontology can be created. The "Active element" concept and the "Current Step" individual are used to point to the current step which is being performed. In order to use this ontology, it will always have to be linked in some way to the method ontology.

The concept "Bag" describes any bag that is used in the bagging problem. Another "Active element" individual, "Current Bag", is linked to the bag that is being used by the algorithm at the time. During execution, the property of the individual "Current Bag", which points to one of the individuals of the concept "Bag", will change several times.

The concept "Freezer Bag" holds individuals for any freezer bag used during the bagging process. The concept "Cart" (or order) holds only one individual. This individual describes the current cart or order of the user and needs to be linked with the items described in the domain ontology. During execution, the cart will lose items by giving them to bags. By the end, the user's cart or order will be empty.

In order to operate with several items, the properties of bags and carts will have to hold a numeric value which indicates the number of the same item they hold, for example, "Contains 2". A more specific graphical representation of the temporal ontology is given in Fig. 3.

VII. USING THE METHOD ONTOLOGY

If we want the method ontology to be used as a set of instructions that describe specific actions of a system, we must define a way that makes that possible.

One possible way would be to let the structural representation of the ontology speak for itself. A user or a sufficiently intelligent software agent could understand the described actions from the element names alone. Another solution would be to give every function element an additional description of the actions that need to be taken.

Also, some sort of language could be introduced in order to make it machine readable and executable. For example, the sub-function "set step" could have an additional description that would state:

Set (target at property "uses_1") to (target at property "uses_2");

In such a language the system would have to know the commands "Set" and "to", and understand which other individuals are referenced. This way it would be possible to introduce system-specific commands that would carry out the required actions. Using a system that provides the possibility for plug-in development and that is designed with component-based ontology usage in mind [2], [4] would be recommended.
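A minimal sketch of how such a description could be interpreted is shown below. The command format follows the "Set ... to ..." example above, while the storage of property targets as a Python dictionary is an assumption made purely for illustration:

```python
import re

# Hypothetical interpreter for descriptions of the form:
#   Set (target at property "uses_1") to (target at property "uses_2");
PATTERN = re.compile(r'Set \(target at property "(\w+)"\) to \(target at property "(\w+)"\);')

def execute(description, individual, working):
    """Resolve both property targets on the sub-function individual and copy the value."""
    match = PATTERN.fullmatch(description.strip())
    if match is None:
        raise ValueError("unknown command: " + description)
    dest_prop, src_prop = match.group(1), match.group(2)
    dest = individual[dest_prop]            # e.g. the "current step" individual
    src = individual[src_prop]              # e.g. the step individual "bag-medium-items"
    working[dest] = working.get(src, src)   # set the destination to the source's value

# "set step" sub-function: uses_1 points at "current step", uses_2 at the new step.
set_step = {"uses_1": "current step", "uses_2": "bag-medium-items"}
state = {"current step": "bag-large-items"}
execute('Set (target at property "uses_1") to (target at property "uses_2");', set_step, state)
print(state["current step"])   # -> bag-medium-items
```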
VIII. CONCLUSION

In this paper we described one possible way of creating and implementing a reusable method ontology that fulfils the bagger algorithm. Reusability would arise from the possibility of mapping the method ontology, which describes the actions, to a new domain ontology model [7]. Some aspects of this approach need to be tested further. In the provided example, a specific hierarchy of concepts and even some individuals were given in the method ontology. How will the mapping process be done when a new domain ontology model is mapped to this method ontology? Solving the problem of concept names not being the same would be easy enough, but what would happen if the structure were not the same? It seems, however, that in the case of the bagger, successful execution would be possible even with an unfamiliar domain ontology, as long as only generic items are used and none of the specific cases occur. There also needs to be a more specific description of the reasoning system and how it operates with the ontology models.

TABLE III. DEFINITION OF SUB-FUNCTION "STEP IS"
4,919.6
2013-12-01T00:00:00.000
[ "Computer Science", "Engineering" ]
PNNGS, a multi-convolutional parallel neural network for genomic selection Genomic selection (GS) can accomplish breeding faster than phenotypic selection. Improving prediction accuracy is the key to promoting GS. To improve the GS prediction accuracy and stability, we introduce parallel convolution to deep learning for GS and call it a parallel neural network for genomic selection (PNNGS). In PNNGS, information passes through convolutions of different kernel sizes in parallel. The convolutions in each branch are connected with residuals. Four different Lp loss functions train PNNGS. Through experiments, the optimal number of parallel paths for rice, sunflower, wheat, and maize is found to be 4, 6, 4, and 3, respectively. Phenotype prediction is performed on 24 cases through ridge-regression best linear unbiased prediction (RRBLUP), random forests (RF), support vector regression (SVR), deep neural network genomic prediction (DNNGP), and PNNGS. Serial DNNGP and parallel PNNGS outperform the other three algorithms. On average, PNNGS prediction accuracy is 0.031 larger than DNNGP prediction accuracy, indicating that parallelism can improve the GS model. Plants are divided into clusters through principal component analysis (PCA) and K-means clustering algorithms. The sample sizes of different clusters vary greatly, indicating that this is unbalanced data. Through stratified sampling, the prediction stability and accuracy of PNNGS are improved. When the training samples are reduced in small clusters, the prediction accuracy of PNNGS decreases significantly. Increasing the sample size of small clusters is critical to improving the prediction accuracy of GS. Introduction In recent years, the yield growth rate of rice [Oryza sativa L.] and maize [Zea mays L.] has gradually slowed (Yu et al., 2022;Tian et al., 2021).Phenotypic selection (PS) identifies the best individuals based on phenotypic values estimated from performance in evaluation trials.It requires a long period and may take many years to obtain plants with the desired resistance (Bandillo et al., 2023).In tea variety breeding, PS takes more than 16 years.A tea breeding program to meet commercial requirements could take more than 40 years (Lubanga et al., 2023).Genomic selection (GS) is a breeding method based on high-density molecular markers (McGowan et al., 2021).GS estimates individual breeding values through phenotypes and single nucleotide polymorphisms (SNPs).Seedlings are selected based on their breeding value to shorten the generation interval and speed up the breeding process (Cappetta et al., 2020).GS improves the breeding selection accuracy and saves costs (Beyene et al., 2021).GS has accurate prediction results for complex traits with low heritability (Bhat et al., 2016;Merrick and Carter, 2021).Genome technology has also been implemented to guide breeding practices (Jannink et al., 2010).GS provides new opportunities for establishing wheat [Triticum aestivum L.] 
hybrid breeding programs (Zhao et al., 2015).In GS breeding, it is necessary to construct a training population (TP) (Somo et al., 2020).We obtain high-quality phenotypes through precise measurements.A genotype-to-phenotype prediction model is established based on the TP's phenotype and genotype (Karlsen et al., 2023;van Hilten et al., 2021;Danilevicz et al., 2022).Finally, the genomic estimated breeding value (GEBV) of the predictive group (PG) is calculated through the statistical model (Melnikova et al., 2021).Each PG is evaluated and utilized according to its GEBV (Park et al., 2020).GS has a selective advantage over PS in soybean yield.However, GS reduces genetic diversity (Bandillo et al., 2023). An early algorithm applied to GS was the best linear-unbiased prediction (BLUP).Subsequently, various algorithms were developed based on BLUP.Genomic best linear-unbiased prediction (GBLUP) assumes that all marker effects have equal variance (Ren et al., 2021).The ridge-regression best linear unbiased prediction (RRBLUP) GS model combines all marker information to predict GEBVs while implementing a penalty function to limit the additive contribution of each marker.Penalties apply equally to all markers for small and large effect genomic components (Rice and Lipka, 2019).When we assume the variance of the marker effect is some prior distribution, the model becomes a Bayesian approach.Currently, Bayesian methods have been developed into Bayesian A, Bayesian B, Bayesian Cp, Bayesian LASSO, and Bayesian ridge regression (Desta and Ortiz, 2014).Bayesian B outperforms GBLUP as the number of quantitative trait loci decreases (Daetwyler et al., 2010).BLUP mainly considers the additive effects of multiple genes, and does not consider dominant effects and interaction effects.For complex agronomic traits, the BLUP prediction accuracy is less than 0.5. To further improve the accuracy of phenotype prediction, machine learning is introduced into GS.Machine learning is a data learning algorithm that does not rely on rule design.It processes large amounts of historical data and autonomously identifies patterns in the data.In a comparative study, the phenotypic prediction accuracy of random forest (RF), stochastic gradient boosting (SGB), and support vector machines (SVMs) were all around 0.5 (Ogutu et al., 2011).Machine learning algorithms are generally more complex than linear algorithms.However, they have higher prediction accuracy (Gonzaĺez-Camacho et al., 2018). 
In recent years, deep learning (DL) has achieved great success in natural language processing, image recognition, and content generation (Otter et al., 2020;Li, 2022;Liu et al., 2021).With the introduction of artificial intelligence into the scientific field, many important discoveries have been made (Wang et al., 2023a).The Alphafold2 paper presented a DL calculation method for the first time to predict protein structures with atomic precision (Jumper et al., 2021).Liu and Wang (2017) proposed a DL model for predicting GEBV using convolutional neural networks (CNNs).The prediction accuracy of their DL model is greater than that of RRBLUP, Bayesian LASSO, and BayesA models.One study compared the genomic prediction accuracy of GBLUP, light gradient boosting machine (LightGBM), support vector regression (SVR), and DL (Wang et al., 2023b).The above research results show that deep neural network genomic prediction (DNNGP) outperforms most existing GS algorithms.The deep learning model performed better than the Bayesian and RRBLUP GS models, regardless of the wheat dataset size (Sandhu et al., 2021). SoyDNGP is a DL model for soybean trait prediction (Gao et al., 2023).It accurately predicts complex traits and shows robust performance across different sample sizes and trait complexity.Transformer-based GPformer is robust and stable to hyperparameters and can generalize to multiple species (Wu et al., 2024).Montesinos-López et al. (2021) reviewed the application of DL methods in GS and summarized the pros and cons of DL methods.The main pros of DL include: (1) DL models can capture non-additive effects and complex interactions among genes; (2) DL models can effectively handle multimodal data; (3) The DL architecture is very flexible and contains various modules.DL methods in GS have some defects: (1) DL is a black-box model and is not helpful for inference and association studies; (2) These models are more prone to overfitting than traditional statistical models; (3) Proper DL models require a very complex tuning process that relies on many hyperparameters.In general, deep learning algorithms are able to capture nonlinear patterns more effectively than traditional linear algorithms. Through continuous research, DL for GS has revealed its advantages over other machine learning.DNNGP has the advantages of a simple model, high phenotypic prediction accuracy, and wide species adaptability.However, DNNGP requires hyperparameter tuning for each phenotype to achieve optimal performance.DNNGP is based on CNN, and the convolution kernel size is its crucial parameter.Since DNNGP is a serial structure, there can only be one kind of convolution kernel at one position.The serial structure causes an "information bottleneck", and much information is lost in this calculation step (Tishby and Zaslavsky, 2015).Determining the convolution kernel size is a time-consuming and computationally intensive task.Convolutional synchronization has succeeded in the image field (Szegedy et al., 2015).It enables deeper neural networks and higher model prediction accuracy without time-consuming hyperparameter tuning.The data were simultaneously convolved with 1×1, 3×3, 5×5, and 7×7 convolutions to minimize information loss.The convolutional parallel structure increases the "width" of the model. 
Different phenotypes have different prediction difficulties.Multiple studies show that the prediction accuracy of simple traits does not exceed 0.8 (Heffner et al., 2011;Heslot et al., 2012).The prediction accuracy of complex agronomic traits remains around 0.3.In many phenotype predictions, the prediction accuracy of DNNGP does not reach 0.8.This paper introduces convolution parallel technology into GS and adjusts it to adapt to onedimensional convolution.This GS method is named parallel neural network for genomic selection (PNNGS).We develop PNNGS to improve the GS prediction accuracy further.To increase the stability of predictions, we introduce clustering algorithms and stratified sampling.The network architecture of PNNGS is similar to that of DNNGP, in which the convolutional layer is changed to a parallel convolutional layer.Each convolution branch has a different convolution kernel.To reduce the overfitting of PNNGS, we introduce residuals on each branch.We train PNNGS with four different loss functions.In the trait prediction of rice, sunflower [Helianthus annuus L.], wheat, and maize, the prediction accuracy of PNNGS is significantly greater than that of DNNGP, demonstrating convolutional parallelization's effectiveness in GS.PNNGS can automatically obtain the optimal convolution size when simultaneously passing through convolution kernels of multiple sizes.It significantly reduces hyperparameter tuning effort.The prediction accuracy of PNNGS for most phenotypes is close to or exceeds 0.8, which meets the needs of practical applications.Through clustering algorithms, the plants are divided into different clusters.We find that wheat is an imbalanced dataset.Plants located in small clusters reduce the prediction accuracy of the phenotype.Reducing data imbalance is an important method to improve GS prediction accuracy. Plant materials In this paper four public plant datasets have been analyzed.These datasets contain gene files and phenotype files.The corresponding plant phenotype is predicted through genomic data. Rice44k dataset The Rice44k dataset comprises 413 inbred rice accessions collected from 82 countries (Zhao et al., 2011).These rice varieties were measured by 44k chips, and 36,901 SNP variants were obtained.Minor-allele frequency (MAF), missing call rate (MCR), and heterozygosity are three indicators for filtering sites in the literature (Thongda et al., 2020).Typical filter conditions are MAF > 0.05, MCR< 0.2, and heterozygosity< 0.05 (Zhang et al., 2022), which can filter out more than half of low-quality sites.Other thresholds for filtering sites, such as MAF > 0.01 or MAF > 0.1, have been applied in the literature (Backman et al., 2021;Liao et al., 2017).We filter rice SNP sites according to MAF > 0.05 and MCR< 0.2, and 33,163 SNPs are retained in the gene file.There are 34 phenotypes included in the Rice44k dataset.In this paper, we will investigate six of these phenotypes: flag leaf length (FLL), leaf pubescence (LP), panicle length (PL), plant height (PH), seed number per panicle (SNPP), and seed surface area (SSA). Sunflower1500k dataset Marco Todesco et al. 
(2020) resequenced 1,506 wild sunflower strains from three species (Helianthus annuus, Helianthus petiolaris and Helianthus argophyllus). We only researched the 614 samples of Helianthus annuus. Sunflower1500k is a large dataset containing 15,697,385 SNP sites and 87 traits. The number of SNPs retained after filtering by MAF > 0.05 and MCR < 20% was 7,902,178. We randomly selected 30,179 sites and conducted the following research based on this gene file. This paper focuses on six traits, namely, flower head diameter (FHD), leaf perimeter (LPE), primary branches (PB), stem color (SC), stem diameter at flowering (SDF), and total RGB (TR). To distinguish it from leaf pubescence, the abbreviation of leaf perimeter is LPE.

Wheat33k dataset

The Wheat33k dataset contains 2,000 Iranian bread wheat landraces from the CIMMYT wheat gene bank (Crossa et al., 2016). It is a dataset with a relatively large sample size. Wheat33k contains 33,709 markers and 8 phenotypes. Due to the high quality of the loci, we did not filter the gene files. Grain hardness (GH), grain length (GL), grain protein (GP), grain width (GW), thousand-kernel weight (TKW), and test weight (TW) are the six phenotypes focused on in this paper. The original literature describes the heritability of these six phenotypes as 0.839, 0.881, 0.625, 0.848, 0.833, and 0.754.

Maize50k dataset

The Maize50k dataset contains genotype data from the 282-member maize inbred association panel (Cook et al., 2012). After discarding some nonconvertible sites, Maize50k contained 50,925 SNP sites. With the same filter criteria as above, the number of SNPs is 45,562. The phenotype file contains 285 trait/environment combinations for 57 traits collected between 2006 and 2009. The genotypic and phenotypic data are obtained from the Panzea datasets. The phenotype file name of Maize50k is maize282NAM-15-130212, downloaded from http://cbsusrv04.tc.cornell.edu/users/panzea/download.aspx?filegroupid=9. The phenotype file contains 16 environments. Days to silk (DS) is a phenotype with six environments. Detailed descriptions of these environments are provided in Supplementary Table S1. The codes for the six maize environments are 06CL1, 065, 26M3, 07CL1, 07A, and 06PR. Through the Maize50k dataset, we study the PNNGS performance in predicting multi-environment phenotypes.
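As a minimal illustration of the MAF and missing-call-rate filtering described above, the following sketch assumes a genotype matrix coded 0/1/2 with missing calls as NaN; the thresholds follow the paper, but the data layout and function name are assumptions:

```python
import numpy as np

def filter_snps(genotypes, maf_min=0.05, mcr_max=0.20):
    """Keep SNP columns with minor-allele frequency > maf_min and missing call rate < mcr_max.
    genotypes: (n_samples, n_snps) array coded 0/1/2, with np.nan for missing calls."""
    mcr = np.mean(np.isnan(genotypes), axis=0)                 # missing call rate per SNP
    allele_freq = np.nanmean(genotypes, axis=0) / 2.0          # frequency of the counted allele
    maf = np.minimum(allele_freq, 1.0 - allele_freq)           # minor-allele frequency
    keep = (maf > maf_min) & (mcr < mcr_max)
    return genotypes[:, keep], keep

# Example with a small random genotype matrix (5 samples, 4 SNPs).
rng = np.random.default_rng(0)
demo = rng.integers(0, 3, size=(5, 4)).astype(float)
demo[0, 1] = np.nan
filtered, mask = filter_snps(demo)
print(mask)
```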
PNNGS architecture

The plant breeding values in GS are estimated from thousands or tens of thousands of SNP sites distributed throughout the genetic material. The first step is to collect plant genome sequences and phenotypes (Figure 1A). Since the collected phenotypes often have some flaws, data cleaning is required. Typical data cleaning includes removing outliers, imputing missing data, and discarding plants. The genotypes of diploid plants are divided into three types: homozygous dominant, heterozygous, and homozygous recessive, which are typically coded as 0, 1, and 2 (Lippert et al., 2011). Each wheat allele is recorded as 1 (present) or 0 (absent). These genotype encoding methods are adopted in this paper. The rows and columns of the input matrix are samples and SNPs, respectively. The number of samples in current GS application scenarios is generally several hundred (Hickey et al., 2017). With the advancement of sequencing technology, SNP sequencing length has reached millions or even tens of millions, so the number of SNPs is four to five orders of magnitude greater than the number of samples. This is the famous "p >> n" problem in the GS field, where p represents the number of SNPs and n represents the number of samples (Yan and Wang, 2023).

The right side of Figure 1A shows the architecture of PNNGS. PNNGS consists of a parallel module, a dropout layer, a batch normalization layer, a parallel module, a dropout layer, and a linear layer in sequence. The dropout layers reduce model overfitting and alleviate the "p >> n" problem; the dropout rate is set to 0.5. A large dropout rate can effectively resist overfitting, but it requires the DL architecture to be quite robust. The batch normalization (BN) layer speeds up network training and convergence. It controls gradient explosion, prevents gradient vanishing, and prevents overfitting. Modern neural networks generally add BN to improve performance (Ioffe and Szegedy, 2015). The data needs to be flattened before entering the linear layer. The output of the linear layer is the prediction of the plant phenotype. PNNGS thus realizes the transformation from plant genotype to phenotype.

The parallel module contains multiple parallel residual convolutions, which is the main innovation of this paper (Figure 1B). The kernel sizes of the first and second paths are 1 and 3, respectively; the kernel size of the n-th path is 2n-1. The calculation results of all parallel branches are then concatenated. To increase the nonlinear representation capability of PNNGS, we pass the data through a rectified linear activation (Relu) layer and obtain the output. In image convolution, we generally use two layers of 3×3 convolutions instead of one layer of 5×5 convolution. However, this technique does not work here: in one-dimensional convolution, the number of convolution parameters is proportional to the kernel size, so we can operate directly with large kernel-size convolutions. The calculation process of the parallel module can be expressed as

y = Relu(Concat(f(x, 1), f(x, 3), ..., f(x, 2n-1))),

where x is the input and y is the output. Function f is a one-dimensional residual convolution operation, and its second parameter is the kernel size. The convolution outputs are concatenated in the channel dimension. Relu is the most frequently used activation function in deep learning models: if it receives any negative input, it returns 0, and for any positive value it returns the value itself.
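A minimal PyTorch sketch of such a parallel module is given below. It follows the description above (one-dimensional convolutions with kernel sizes 1, 3, ..., 2n-1, a residual connection on each branch, channel-wise concatenation, Relu), but the channel counts and module names are assumptions made for illustration, not the authors' released code:

```python
import torch
import torch.nn as nn

class ParallelModule(nn.Module):
    """Parallel residual 1-D convolutions with kernel sizes 1, 3, ..., 2n-1."""
    def __init__(self, channels: int, n_paths: int):
        super().__init__()
        self.branches = nn.ModuleList()
        for i in range(1, n_paths + 1):
            k = 2 * i - 1                      # kernel size of the i-th path
            self.branches.append(
                nn.Conv1d(channels, channels, kernel_size=k, padding=(k - 1) // 2)
            )
        self.relu = nn.ReLU()

    def forward(self, x):
        # Each branch is a residual convolution: conv(x) + x (shapes match by construction).
        outs = [branch(x) + x for branch in self.branches]
        # Concatenate branch outputs along the channel dimension, then apply Relu.
        return self.relu(torch.cat(outs, dim=1))

# Example: batch of 8 samples, 1 input channel, 1000 SNPs, 4 parallel paths.
module = ParallelModule(channels=1, n_paths=4)
y = module(torch.randn(8, 1, 1000))
print(y.shape)   # torch.Size([8, 4, 1000])
```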
The optimizer for PNNGS is Adam. The learning rate and weight decay are set to 0.001 and 0.1, respectively. Weight decay is essentially an L2 regularization coefficient. Training PNNGS is challenging since the loss function cannot be set to the Pearson correlation coefficient. The Lp loss function is the most popular in the field of machine learning; p is a parameter that adjusts the sensitivity to outliers. When p is small, the model is robust to outliers; conversely, a large p makes better predictions for outliers. Common Lp loss functions in the literature are L1 and L2: the L1 and L2 losses are the mean absolute error (MAE) and mean squared error (MSE), respectively. We train PNNGS based on the L05, L1, L2, and L3 loss functions (Figure 1E), where L05 refers to the Lp loss function with p = 0.5. The Lp loss is

Lp = (1/n) Σ_{i=1..n} |y_{i,true} − y_{i,pred}|^p,

where the sample size is n, and y_{i,true} and y_{i,pred} are the true phenotype and predicted phenotype of the i-th individual, respectively. We tried adding more layers to PNNGS, which only reduced the Lp loss but did not improve the Pearson correlation coefficient. PNNGS is a fairly small model compared with the popular large models. More details about PNNGS are available in the source program.

Four other GS models

Four GS models (RRBLUP, RF, SVR, and DNNGP) have been compared with PNNGS for phenotypic prediction accuracy. These four GS models have the characteristics of simple principles, stable performance, and wide application.

RRBLUP model

The RRBLUP model is based on the best linear unbiased prediction model (Rice and Lipka, 2019). The BLUP model is described as follows:

y_i = m + Σ_{k=1..q} x_ik b_k + e_i,

where m is the phenotypic mean, x_ik is the genotype of the k-th site of the i-th individual, q represents the number of SNP sites, b_k is the estimated random additive SNP effect at the k-th site, e_i is the residual error term, and y_i is the phenotype of the i-th individual.

The loss function of RRBLUP is

L = Σ_i (y_i − m − Σ_k x_ik b_k)² + λ Σ_k b_k²,

where λ Σ_k b_k² is the ridge regression penalty, which reduces the value range of b_k, and λ is a hyperparameter that controls the intensity of the penalty. Our goal is to minimize this loss. BLUP with the penalty term is RRBLUP. Compared with BLUP, the stability and prediction accuracy of RRBLUP are improved simultaneously.

RF model

RF is a popular supervised machine learning method for classification and regression. It combines the predictions of multiple decision trees into a single overall prediction (Annicchiarico et al., 2015). Training a random forest means training each decision tree independently. The principle of RF is that the variance among the decision trees helps avoid overfitting; it is easy to overfit when training a single decision tree on the entire training set. Random forest regression (RFR) is an ensemble learning method. RFR is widely used in the GS field, and its prediction accuracy and generalization are competitive (Blondel et al., 2015).

SVR model

SVR is a machine-learning technique for regression tasks. It is a variant of SVM designed to predict continuous values, making it suitable for quantitative trait prediction. SVR identifies a "margin" around the predicted regression line. Its goal is to fit a line within this margin while minimizing the prediction error. SVR is robust to outliers because it focuses primarily on data points near the margin instead of relying heavily on all data points (Üstün et al., 2005). It is beneficial in dealing with nonlinear relationships and can be adapted to various problem domains by selecting kernel functions (Wu et al., 2009). In wheat GS, a nonlinear RBF kernel is an optimal choice for SVR (Long et al., 2011).
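Since the study implements RRBLUP, RF, and SVR with scikit-learn, a minimal baseline sketch along those lines is shown below; the specific hyperparameters and the toy data are illustrative assumptions rather than the values used in the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from scipy.stats import pearsonr

# X: (n_samples, n_snps) genotype matrix coded 0/1/2; y: phenotype vector.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 500)).astype(float)
y = X[:, :10].sum(axis=1) + rng.normal(0, 1.0, size=200)   # toy additive phenotype

models = {
    "RRBLUP-like ridge": Ridge(alpha=1.0),                  # ridge penalty, as in RRBLUP
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "SVR (RBF)": SVR(kernel="rbf", C=1.0),
}

train, test = np.arange(150), np.arange(150, 200)
for name, model in models.items():
    model.fit(X[train], y[train])
    pred = model.predict(X[test])
    r, _ = pearsonr(y[test], pred)
    print(f"{name}: Pearson r = {r:.3f}")
```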
DNNGP model DNNGP is a recent deep-learning algorithm for GS.It clarifies that BN, early stopping, and Relu are three effective techniques for GS.The architecture of DNNGP is simple yet effective, as it balances sample size and network depth well.It is the first deep-learning algorithm that clearly outperforms LightGBM and SVR in the GS domain.DNNGP contains three CNN layers, one BN layer, and two dropout layers.It is serial and has no branches.Compared with DNNGP, PNNGS chooses to increase the width of the network instead of the depth.Since the sample size is only a few hundred, neural networks with more than five layers are prone to overfitting (Zou et al., 2019). Evaluation criteria The Pearson correlation coefficient is applied as the evaluation criterion for the GS model.The Pearson correlation coefficient ranges from -1 to 1.In most cases, its value ranges from 0 to 1 in the GS model.The GS model makes a perfect prediction when the Pearson correlation coefficient is 1.When the Pearson correlation coefficient is 0, the phenotype predicted by the GS model is linearly independent of the observed phenotype.Since the value range of the Pearson correlation coefficient is fixed, it is easy to compare the performance of the GS model on different phenotypes.Compared with MAE and MSE, the Pearson correlation coefficient is a more appropriate GS evaluation criterion.When the GS model predicts the phenotype of all plants as an average, MAE and MSE still give an excellent score for this predictive measure.In this case, the Pearson correlation coefficient is 0, indicating that the GS model for predicting the phenotypic mean is the worst.The Pearson correlation coefficient is the most popular evaluation criterion in the GS field (Akdemir et al., 2015). The normalized root mean square error (NRMSE) is calculated as the root mean square error divided by the range of the observations, expressed as a percentage.The range of the observations is the difference between the maximum and minimum values of the observed data.The value range of NRMSE is [0, +inf).NRMSE is our secondary evaluation criterion. Phenotype distribution pattern To predict phenotypes more accurately, we need to analyze phenotypic distributions.The correlations among the six rice phenotypes are relatively weak.The Pearson correlation coefficients between most phenotypes are less than 0.3, meaning they are linearly independent (Figure 2A).The Pearson correlation coefficient between PH and PL is 0.594.PH and PL increase with increasing mid-season temperature, which results in a strong correlation between them (Kovi et al., 2011).The prediction accuracy of GS for highly correlated phenotypes is similar.The Pearson correlation coefficient between FLL and PL is 0.55, indicating a positive correlation.Flag leaf plays a vital role in providing photosynthetic products to grains.Plants with long FLL elongate PL, resulting in increased grain number per ear (Rahman et al., 2013).When PL increases, PH and FLL increase with considerable probability. 
The Pearson correlation coefficients between most sunflower phenotypes are less than 0.3 (Figure 2B).There is a strong positive correlation between PB and SDF, as ethrel can increase both PB and SDF (Kumar et al., 2010).TR is a unique phenotype because it is negatively correlated with other phenotypes.TR is defined as the sum of RGB values of leaf color.TR increases significantly when the red channel signal of leaves is enhanced (Chen et al., 2020).In rice, FLL, flag leaf length; LP, leaf pubescence; PL, panicle length; PH, plant height; SNPP, seed number per panicle; SSA, seed surface area.In sunflower, FDD, flower head diameter; LPE, leaf perimeter; PB, primary branches; SC, stem color; SDF, stem diameter at flowering; TR, total RGB.In wheat, GH, grain hardness; GL, grain length; GP, grain protein; GW, grain width; TKW, thousand-kernel weight; TW, test weight.06PR, 07A, 07CL1, 26M3, 065, and 06CL1 are the codes for the six maize environments. Therefore, a significant TR indicates yellow leaves and a small TR means green leaves.If TR is significant, the photosynthesis efficiency of the leaves will be low, and the plant growth will be poor.TR has the strongest negative correlation with PB. The Pearson correlation coefficients between wheat phenotypes are mostly less than 0, indicating that most phenotypes are negatively correlated (Figure 2C).Wide grains reduce grain hardness, which is consistent with the mechanical properties of the material.Therefore, the Pearson correlation coefficient between GH and GW (-0.407) is much less than 0. GL and TKW are significantly positively correlated, and their Pearson correlation coefficient is 0.568.GP is negatively correlated with GW, TKW, and TW, indicating that there is a contradiction between wheat yield and grain protein content.The bad news is that it is difficult to obtain wheat varieties that are both high in yield and high in protein.The Pearson correlation coefficient between GW and TKW is 0.762, which indicates that the key to increasing wheat yield is to increase grain width. Different from rice, sunflower, and wheat, we calculate the Pearson correlation coefficient of the same phenotype in maize under different environments.The correlation coefficients between phenotypes are all greater than 0.7 (Figure 2D).The Pearson correlation coefficient between 06CL1 and 07CL1 reaches 0.96, which means that we can predict the DS in 07CL1 through the DS in 06CL1.The difference between 06CL1 and 07CL1 is minimal, and their difference is one year in planting date.The correlation coefficients between the DS in 06PR and the DS in other environments range from 0.7 to 0.8.06PR is the only winter environment among the six environments, and the other five are all summer.The above analysis results show that the correlation between the same phenotype in different environments is much more significant than the correlation between different phenotypes in the same environment.Since the correlation coefficients between different phenotypes are small, genomic prediction is required for each phenotype. 
There are 570 sunflowers with both PB and TR.Their distribution is shown in Figure 2E.PB has a long-tailed, rightskewed distribution.The maximum, minimum, mean, and standard deviation of PB is 85, 6, 23.1, and 11.4,respectively.TR approximately satisfies the normal distribution, and its maximum, minimum, mean, and standard deviation values are 218.7,106.7, 174.2, and 15.9, respectively.Compared with PB, TR has a larger span.However, TR is more concentrated.The coefficient of variation is the ratio of the standard deviation to the mean and is a measure of the dispersion of a data set.The variation coefficients of PB and TR are 0.49 (= 11.4/23.1)and 0.09 (= 15.9/174.2) respectively.A small coefficient of variation means that the data are compact.Therefore, PB is more dispersed than TR. Selection of the number of parallelism The computing platform is Intel(R) i7-8700 CPU, RTX 3090 GPU, 32 GB RAM, and Windows 10.PNNGS and DNNGP are implemented through torch 1.7.RRBLUP, RF, and SVR are implemented based on scikit-learn 1.3.All calculations are performed in Python, and the source code is open. For PNNGS, determining the number of parallelism is the primary task.The number of parallelism is a hyperparameter, and its specific value is not presented in the PNNGS architecture.We need to do a grid search experiment to determine the number of parallelism, and the experimental results are presented in Table 1.The number of parallels could be 2, 3, 4, 5, 6, 7, and 8.The experimental phenotypes are SNPP in rice, FHD in sunflower, GH in wheat, and DS_065 in maize.For SNPP, the prediction accuracy first increases and then decreases with the increase of parallelism.When the parallel number is 4, the prediction accuracy of SNPP is the highest, which is 0.664.If the parallel number is inappropriate, the phenotype prediction accuracy will drop by 0.014.FHD, GH, and DS_065 show similar change patterns.The optimal parallel numbers for FHD, GH, and DS_065 are 6, 4, and 3, respectively. In the following phenotypic predictions, the parallel numbers of rice, sunflower, wheat, and maize phenotypes are 4, 6, 4, and 3, respectively.We did not perform a grid search for each phenotype.Due to the stochastic nature of neural networks, the calculation results of PNNGS may fluctuate slightly.In repeated calculations, the optimal parallel number of PNNGS may be slightly different from Table 1.However, it has little impact on the prediction results. 
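A minimal sketch of this grid search over the number of parallel paths is given below; the train_and_score function is a placeholder assumed for illustration, standing in for a full cross-validated PNNGS training run:

```python
# Hypothetical grid search over the number of parallel paths (2..8), as described above.
def select_n_parallel(train_and_score, candidates=range(2, 9)):
    """train_and_score(n) -> mean cross-validated Pearson r of PNNGS with n parallel paths."""
    scores = {n: train_and_score(n) for n in candidates}
    best = max(scores, key=scores.get)
    return best, scores

# Example with a toy scoring function that peaks at 4 paths (mimicking the SNPP result).
best, scores = select_n_parallel(lambda n: 0.664 - 0.003 * abs(n - 4))
print(best, round(scores[best], 3))   # -> 4 0.664
```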
PNNGS prediction accuracy for phenotypes

We utilized PNNGS to predict the previously analyzed phenotypes. The rice, sunflower, and wheat phenotypes were chosen to test the predictive ability of PNNGS for different phenotypes, while six environmental phenotypes of maize were predicted to assess the performance of PNNGS under different environments. To reduce the impact of dataset partitioning, we introduce ten-fold cross-validation in this study. The average of the ten Pearson correlation coefficients for a phenotype is regarded as the final prediction accuracy of PNNGS for that phenotype. Through NRMSE, we know the difference between the predicted value and the true value; therefore, NRMSE also evaluates the effect of model prediction. We simultaneously applied RRBLUP, RF, SVR, and DNNGP to predict these phenotypes and compared their prediction ability with PNNGS. In FLL, LP, PL, PH, SNPP, and SSA predictions, RRBLUP, RF, and SVR are competitive (Figure 3A). The prediction accuracy of DNNGP is greater than or equal to that of RRBLUP, RF, and SVR, and DNNGP obtains robust phenotype predictions on different datasets. Compared with serial DNNGP, parallel PNNGS achieves higher prediction accuracy. Among the six phenotypes, the prediction accuracy of PNNGS was higher than that of DNNGP by 0.04, 0.02, 0.04, 0.03, 0.04, and 0.02. As the prediction accuracy of DNNGP increases, the prediction accuracy improvement of PNNGS decreases: PNNGS can significantly improve the prediction of phenotypes that the other models predict relatively poorly.

For wheat GL, ten-fold cross-validation with random sampling gives a prediction accuracy of 0.759 with a standard deviation of 0.032 across folds (Figure 5A). Reducing the wheat genomic data to two dimensions through PCA (Figure 5B) makes it convenient to display the results. Intuitively, the data form three clusters. K-means clustering is introduced, and its n_clusters parameter is set to 3, so each plant is classified into Cluster 1, Cluster 2, or Cluster 3 (Supplementary Table S2). The numbers of plants in Cluster 1, Cluster 2, and Cluster 3 are 251, 1025, and 724, respectively; in total there are 2000 (= 251 + 1025 + 724) wheat plants with GL. The centroid coordinates of Cluster 1, Cluster 2, and Cluster 3 are (16.8, 23.1), (-18.0, -0.8), and (19.5, -6.9), respectively. For the same dataset, we also divided the data into 100 clusters through the same method (Supplementary Table S3); on average, each cluster then has 20 samples. Since there are too many categories, it is not convenient to display them in figures.

The previous calculations are all based on random sampling. Along with clustering, stratified sampling is introduced. We again predict GL through stratified sampling and ten-fold cross-validation (Figure 5C). According to the results in Figures 5A and 5C, the fitted normal distribution becomes more peaked as the number of clusters increases, indicating that the standard deviation of the prediction results gradually decreases. When the number of clusters is 3 and 100, the standard deviations of the prediction results are 0.029 and 0.024, respectively; the standard deviations decrease by 0.003 (= 0.032 - 0.029) and 0.008 (= 0.032 - 0.024), respectively. The prediction stability of PNNGS across different folds is significantly improved. Another notable improvement is the increase in GL prediction accuracy: when the number of clusters is 1, 3, and 100, the GL prediction accuracy is 0.759, 0.760, and 0.768, respectively.
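A minimal sketch of the clustering and stratified cross-validation workflow described above, using scikit-learn, is given below; the toy data, the ridge stand-in for PNNGS, and the cluster count are placeholders, not the study's exact settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import Ridge
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(300, 400)).astype(float)       # toy genotype matrix
y = X[:, :5].sum(axis=1) + rng.normal(0, 1.0, size=300)     # toy phenotype

# 1) PCA to two dimensions for display, 2) K-means clustering on the reduced data.
coords = PCA(n_components=2).fit_transform(X)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)

# 3) Stratified ten-fold cross-validation: each fold keeps the cluster proportions.
scores = []
for train, test in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, clusters):
    model = Ridge(alpha=1.0).fit(X[train], y[train])         # stand-in for PNNGS
    r, _ = pearsonr(y[test], model.predict(X[test]))
    scores.append(r)

print(f"mean Pearson r = {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```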
In Figure 5B, we marked a tiny cluster of closely spaced samples with dashed lines.We can only accurately predict the samples in this tiny cluster based on other samples in this tiny cluster.In random sampling, there is no guarantee that samples in tiny clusters will appear in the training set.In this case, the prediction accuracy is low, and the fluctuation is large.If we divide the data into 100 clusters and perform stratified sampling, some samples in the dashed line will definitely be in the training set.The quality of the training set data is improved.The phenotypic prediction accuracy became stable among different folds. Since the training set is randomly sampled rather than stratified, it causes large fluctuations in prediction accuracy at different folds.Stratified sampling should not be performed based on phenotype.We need to perform PCA on the genomic data first.Plants are clustered through a clustering algorithm.Stratified sampling according to categories can reduce fluctuations in prediction accuracy, allowing us to evaluate the model more objectively.The main factor currently plaguing the application of GS is its low prediction accuracy.The introduction of deep learning improves the GS prediction accuracy.However, the prediction accuracy of complex traits still cannot meet the needs of practical agricultural applications.Insufficient sample size is the most important factor restricting the further improvement of deep learning for GS.Insufficient samples destroy the identical distribution of training and test sets.High-quality samples are the key to solving all the above problems. We designed four schemes to establish PNNGS (Figure 6A).The prediction object is wheat GL.In Scheme A, all data were subjected to stratified sampling and ten-fold cross-validation.In scheme B, we randomly select 200 samples from Cluster 1 as an additional test set.Stratified sampling and ten-fold cross-validation were performed on the remaining samples.These additional test sets are also used to test the prediction accuracy of the algorithm.The specific calculation details are in our code.In Schemes C and D, 200 samples are randomly selected from Cluster 2 and Cluster 3, respectively, as additional test sets.The purpose of this experiment is to detect the importance of samples in different clusters for phenotype prediction. The calculation results are presented in Figure 6B.The phenotypic prediction accuracies of Scheme A, B, C, and D are 0.760, 0.661, 0.754, and 0.751, respectively.Their standard deviations are 0.029, 0.033, 0.023, and 0.017 respectively.Undoubtedly, the phenotypic prediction accuracy in Scheme A is the highest.The calculation results of Schemes C and D are close.Scheme B is the worst in terms of both prediction accuracy and prediction stability.If Cluster 1 is reduced by 200 samples, its sample size will only be 51.PNNGS cannot adequately train on Cluster 1 samples.Therefore, the phenotype prediction accuracy in Scheme B drops significantly.Samples in small clusters are more important for phenotype prediction. 
To verify the universality of the above conclusions, we performed similar calculations on rice FLL. The rice genomic data were reduced to two dimensions based on PCA, and the rice samples were divided into three clusters by K-means (Figure 7A). The numbers of samples in Clusters 1, 2, and 3 are 84, 234, and 59, respectively (Supplementary Table S4). Their centroid coordinates are (123.6, -56.5), (-73.0, -1.2), and (113.6, 85.2), respectively. The four schemes in Figure 7B are used for FLL prediction. Due to the small total sample size, 40 samples were selected as an additional test set. In Schemes A, B, C, and D, the prediction accuracy of FLL is 0.614, 0.478, 0.580, and 0.441, respectively (Figure 7C). When the sample size of Cluster 3 is reduced by 40, the FLL prediction accuracy decreases significantly (= 0.614 - 0.441). When the same reduction occurs in Cluster 2, the FLL prediction accuracy is only slightly reduced (= 0.614 - 0.580). The decrease in prediction accuracy is negatively related to the cluster sample size. The FLL standard deviation in Scheme A is 0.106. The standard deviation of FLL is significantly larger than that of GL because the sample size of rice (377) is much smaller than that of wheat (2000).

In summary, different varieties of plants can be divided into clusters through PCA. Sample sizes can vary widely between clusters; therefore, our gene files are unbalanced data. Stratified sampling can improve the stability and accuracy of phenotypic prediction. The sample size of small clusters is crucial for phenotypic prediction. If the phenotypic prediction accuracy does not meet the application requirements, increasing the sample size of small clusters is a very effective method; meanwhile, it also improves prediction stability.

Compared with the existing GS models, PNNGS shows significant advantages. However, to maximize the prediction accuracy, we recommend training PNNGS in the following way. PNNGS requires a grid search to obtain the optimal number of parallel paths. Stratified sampling can improve both the prediction stability and the accuracy of PNNGS; for stratified sampling we must perform PCA and clustering on the genomic data, and the more clusters there are, the better the PNNGS prediction is. If the prediction accuracy of PNNGS still cannot meet the application requirements, we need to collect more samples from small clusters. The prediction accuracy of PNNGS increases with phenotypic heritability, so PNNGS is ideally applied to phenotypes with high heritability; current GS models cannot achieve high prediction accuracy for phenotypes with low heritability. Through the above steps, compared with the existing GS models, the prediction accuracy of PNNGS is improved by 0.031, and the prediction standard deviation is reduced by 25%.
Conclusion

Previous deep learning for GS is serial. Our study introduces a parallel structure into GS for the first time. The convolution kernel size of each branch is different, and residual connections are also added to each branch. Since the Pearson correlation coefficient cannot be used as a loss function, we train PNNGS through four Lp loss functions. Through grid search, the optimal numbers of parallel paths for rice, sunflower, wheat, and maize are 4, 6, 4, and 3, respectively. In 24 phenotypic prediction cases of rice, sunflower, wheat, and maize, PNNGS outperformed RRBLUP, RF, SVR, and DNNGP, which shows that PNNGS is highly robust. Compared with DNNGP, the average phenotype prediction accuracy of PNNGS increased by 0.031. From the perspective of NRMSE, PNNGS ranked first in all phenotype predictions. It makes sense for GS to introduce a parallel structure. Random sampling makes phenotypic predictions unstable. Through PCA and K-means, plants can be divided into different clusters. The standard deviations of GL prediction are 0.032, 0.029, and 0.024 through random sampling, 3-cluster stratified sampling, and 100-cluster stratified sampling, respectively. The prediction stability of PNNGS with stratified sampling is significantly improved. PNNGS is trained to predict GL after removing 200 training samples from each cluster in turn. When samples are removed from small clusters, the prediction accuracy of GL drops significantly; as the number of samples in large clusters decreases, the prediction accuracy of GL decreases only slightly. A similar phenomenon occurs with rice. The small-cluster sample size is critical for phenotypic prediction, and we should collect more plants located in small clusters.

If an attention mechanism is added, the prediction accuracy of PNNGS is expected to improve further. Meanwhile, the manually set parameters in PNNGS should be reduced as much as possible. PNNGS is a deep integration of biological technology and information technology in the seed industry. It can help breed new varieties of plants and animals faster, better, and more efficiently. With the advancement of deep learning architectures and the increase of plant gene/phenotype data, GS is increasingly showing its superiority.

FIGURE 1. Schematic diagram of PNNGS. Plant phenotypes and genome sequences are collected. Homozygous dominant, heterozygous, and homozygous recessive are coded as 2, 1, and 0. The input and output of PNNGS are the genome matrix and plant phenotype, respectively. The parallel module includes multiple convolutions of different kernel sizes. (A) architecture of PNNGS; (B) structural details of the parallel module; (C) residual convolution with different kernel sizes; (D) PNNGS calculation process with three branches; (E) four Lp loss functions. PNNGS, a parallel neural network for genomic selection.
FIGURE 2. Correlations between different phenotypes. The value in each grid cell is the Pearson correlation coefficient of the two phenotypes. The correlations between different phenotypes are small. The correlation coefficient of the same phenotype under different environments is large. The distribution of PB is scattered, and the distribution of TR is relatively concentrated. (A) the Pearson correlation coefficient between different rice phenotypes; (B) the Pearson correlation coefficient between different sunflower phenotypes; (C) the Pearson correlation coefficient between different wheat phenotypes; (D) the Pearson correlation coefficient of maize days to silk under different environments; (E) the distribution of sunflower PB and TR. In rice, FLL, flag leaf length; LP, leaf pubescence; PL, panicle length; PH, plant height; SNPP, seed number per panicle; SSA, seed surface area. In sunflower, FHD, flower head diameter; LPE, leaf perimeter; PB, primary branches; SC, stem color; SDF, stem diameter at flowering; TR, total RGB. In wheat, GH, grain hardness; GL, grain length; GP, grain protein; GW, grain width; TKW, thousand-kernel weight; TW, test weight. 06PR, 07A, 07CL1, 26M3, 065, and 06CL1 are the codes for the six maize environments.

FIGURE 5. Ten-fold cross-validation calculation results of PNNGS. The calculated phenotype is the GL of wheat. The prediction accuracy of ten-fold cross-validation fluctuates widely. Phenotypic prediction accuracy has little correlation with the phenotypic difference between the training and the test sets. PCA is performed on the genomic data. The dataset is divided into three clusters through K-means. Stratified sampling analysis is performed based on the clustering results. (A) prediction accuracy obtained by ten-fold cross-validation and its distribution; (B) PCA and K-means clustering; (C) calculation results for 3 clusters and 100 clusters. PCA, principal component analysis.

FIGURE 6. The impact of reducing samples in different clusters on prediction results. The predicted phenotype is wheat GL. Cluster 1 has the fewest samples, and the phenotype prediction accuracy decreased the most when reducing the samples in Cluster 1. (A) four schemes to divide the training set and test set; (B) prediction results of the four schemes.

FIGURE 7. The impact of sample reduction on rice phenotype prediction accuracy. The prediction target is rice FLL. (A) PCA and K-means clustering; (B) four schemes to divide the FLL training set and test set; (C) FLL prediction results of the four schemes.

TABLE 1. Phenotype prediction accuracy with different numbers of parallel paths. The calculation model is PNNGS. SNPP, FHD, GH, and DS_065 represent the phenotypes of rice, sunflower, wheat, and maize, respectively. The best predictions are in bold.
9,115.4
2024-09-03T00:00:00.000
[ "Computer Science" ]
Jet fragmentation transverse momentum measurements from di-hadron correlations in $\sqrt{s}$ = 7 TeV pp and $\sqrt{s_{\rm{NN}}}$ = 5.02 TeV p-Pb collisions The transverse structure of jets was studied via jet fragmentation transverse momentum ($j_{\rm{T}}$) distributions, obtained using two-particle correlations in proton-proton and proton-lead collisions, measured with the ALICE experiment at the LHC. The highest transverse momentum particle in each event is used as the trigger particle and the region $3<p_{\rm{Tt}}<15$ GeV/$c$ is explored in this study. The measured distributions show a clear narrow Gaussian component and a wide non-Gaussian one. Based on Pythia simulations, the narrow component can be related to non-perturbative hadronization and the wide component to quantum chromodynamical splitting. The width of the narrow component shows a weak dependence on the transverse momentum of the trigger particle, in agreement with the expectation of universality of the hadronization process. On the other hand, the width of the wide component shows a rising trend suggesting increased branching for higher transverse momentum. The results obtained in pp collisions at $\sqrt{s}$ = 7 TeV and in p-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 5.02 TeV are compatible within uncertainties and hence no significant cold nuclear matter effects are observed. The results are compared to previous measurements from CCOR and PHENIX as well as to Pythia 8 and Herwig 7 simulations. Introduction Jets are collimated sprays of hadrons originating from the fragmentation of hard partons produced in high-energy particle collisions. Studying the jet fragmentation can provide information about QCD color coherence phenomena, such as angular ordering [1], and constrain hadronization models [2][3][4]. The transverse fragmentation of partons is often studied using the jet fragmentation transverse momentum, j T , that describes the momentum component of particles produced in the fragmentation perpendicular to the momentum vector of the hard parton initiating the fragmentation. Previously, j T has been studied using two-particle correlations by the CCOR collaboration at ISR with pp collisions at center-of-mass energy Jet fragmentation in QCD consists of two different steps [10]. After the hard scattering, partons go through a QCD induced showering step, where gluons are emitted and the high virtuality of the parton is reduced. Since the transverse momentum scale (Q 2 ) is large during the showering, perturbative QCD calculations can be applied. When Q 2 becomes of the order of Λ QCD , partons hadronize into final-state particles through a non-perturbative process. Two distinct components, related to the showering and hadronization phases, can be identified from the measured j T distributions. The presence of a heavy nucleus as in p-A collisions might alter the fragmentation process. One possible mechanism for this is initial or final-state scattering of partons inside the nucleus. This is expected to lead to a broadening of jets, since the scattered partons are likely to deviate from their original direction [11]. Also the nuclear parton distribution functions can change the relative contributions of quarks and gluons compared to free nucleons, for example via gluon saturation and shadowing effects [12,13]. Understanding the implications of these cold nuclear matter effects will provide an important baseline for similar measurements in heavy-ion collisions. 
In this paper, the j_T distributions are studied using two-particle correlations, measured by the ALICE detector in √s = 7 TeV pp and √s_NN = 5.02 TeV p-Pb collisions. The correlation approach is chosen as opposed to full jet reconstruction based on the discussion in Refs. [14,15], where it is argued that two-particle correlations are more sensitive to the soft and non-perturbative parts of the jet fragmentation. This is important for the separation of the two j_T components and in searching for cold nuclear matter effects, which are expected to play a larger role at lower momenta. This paper is structured as follows. The event and track selection together with the used data samples are described in Section 2. The analysis details are discussed in Section 3, followed by the systematic uncertainty analysis in Section 4. The obtained results are shown in Section 5 and the observations are summarized in Section 6.

Experimental setup and data samples

This analysis uses two different datasets. The √s = 7 TeV pp collisions (3.0 · 10^8 events, integrated luminosity L_int = 4.8 nb^-1) were recorded in 2010 and the √s_NN = 5.02 TeV p-Pb collisions (1.3 · 10^8 events, L_int = 620 nb^-1) were recorded in 2013 by the ALICE detector [16]. The details of the performance of the ALICE detector during LHC Run 1 (2009-2013) are presented in Ref. [17]. The charged-particle tracks used in this analysis are reconstructed using the Inner Tracking System (ITS) [18] and the Time Projection Chamber (TPC) [19]. The tracking detectors are located inside a large solenoidal magnet which provides a homogeneous magnetic field of 0.5 T. They are used to reconstruct tracks within a pseudorapidity range of |η| < 0.9 over the full azimuth. The ITS consists of six layers of silicon detectors: the two innermost layers are the Silicon Pixel Detector (SPD), the two middle layers are the Silicon Drift Detector (SDD) and the two outermost layers are the Silicon Strip Detector (SSD). The TPC is a gas-filled detector capable of providing three-dimensional tracking information over a large volume. Combining information from the ITS and the TPC, the momenta of charged particles from 0.15 to 100 GeV/c can be determined with a resolution ranging from 1 to 10%. For tracks without the ITS information, the momentum resolution is comparable to that of ITS+TPC tracks below transverse momentum p_T = 10 GeV/c, but for higher momenta the resolution reaches 20% at p_T = 50 GeV/c [17,20]. Charged-particle tracks with p_T > 0.3 GeV/c in the region |η| < 0.8 are selected for the analysis. Events are triggered based on the information of the V0 detector [21] together with the SPD. The V0 detector consists of two scintillator stations, one on each side of the interaction point, covering −3.7 < η < −1.7 (V0C) and 2.8 < η < 5.1 (V0A). For the 2010 pp collisions, the minimum bias (MB) triggered events are required to have at least one hit from a charged particle traversing the SPD or either side of the V0. The pseudorapidity coverage of the SPD is |η| < 2 in the first layer and |η| < 1.5 in the second layer. Combining this with the acceptance of the V0, the particles are detected in the range −3.7 < η < 5.1. The minimum bias trigger definition for the 2013 p-Pb collisions is slightly different: events are required to have signals in both V0A and V0C. This condition is also used later offline to reduce the contamination of the data sample from beam-gas events by using the timing difference of the signal between the two stations [17].
For the pp collisions, similar track cuts as in Ref. [22] are used: at least two hits in the ITS are required, one of which needs to be in the three innermost layers, and 70 hits out of 159 are required in the TPC. In addition, the distance of the closest approach (DCA) of the track to the primary vertex is required to be smaller than 2 cm in the beam direction. In the transverse direction, a p T dependent cut DCA < 0.0105 cm + 0.035 cm · $p_{\rm T}^{-1.1}$ is used, where p T is measured in units of GeV/c. These track cuts are tuned to minimize the contamination from secondary particles. For the p-Pb collisions the tracks are selected following the so-called hybrid approach, which is described in detail in Ref. [23]. This approach differs from the one presented above in the selection of ITS tracks. The tracks with at least one hit in the SPD and at least two hits in the whole ITS are always accepted. In addition, tracks with fewer than two hits in the ITS or no hits in the SPD are accepted, but only if an additional vertex constraint is fulfilled. The DCA cuts are also looser: smaller than 3.2 cm in the beam direction and smaller than 2.4 cm in the transverse direction. With this track selection, the azimuthal angle (ϕ) distribution is as uniform as possible, because it is not affected by dead regions in the SPD. This is important for a two-particle correlation analysis. The momentum resolutions of the two classes of particles are comparable up to p T ≈ 10 GeV/c, but after that, tracks without ITS requirements have a worse resolution [17,20]. Analysis method The analysis is performed by measuring two-particle correlation functions. In each event, the trigger particle is chosen to be the charged particle with the highest reconstructed p T inside the acceptance region, called the leading particle. For the momentum range studied in the analysis, simulation studies show that the direction of the leading particle can, to a good approximation, be taken as the jet axis, which is the axis defined by the momentum vector of the hard parton initiating the jet fragmentation. The associated particles close in phase-space to the leading one are then interpreted as jet fragments. The jet fragmentation transverse momentum, j T , is defined as the component of the associated particle momentum, p a , transverse to the trigger particle momentum, p t . The resulting j T is illustrated in Fig. 1. The length of the j T vector is

$$j_{\rm T} = \frac{|\vec{p}_{\rm t} \times \vec{p}_{\rm a}|}{|\vec{p}_{\rm t}|}. \qquad (1)$$

Figure 1: (Color online) Illustration of j T and x. The jet fragmentation transverse momentum, j T , is defined as the transverse momentum component of the associated particle momentum, p a , with respect to the trigger particle momentum, p t . The fragmentation variable x is the projection of p a onto p t divided by the magnitude of p t .

j T is commonly interpreted as a transverse kick with respect to the initial hard parton momentum that is given to a fragmenting particle during the fragmentation process. In other words, j T measures the momentum spread of the jet fragments around the jet axis. In the analysis, results are presented in bins of the fragmentation variable x, which is defined as the projection of the associated particle momentum onto the trigger particle momentum, divided by the momentum of the trigger particle:

$$x = \frac{\vec{p}_{\rm t} \cdot \vec{p}_{\rm a}}{|\vec{p}_{\rm t}|^{2}}. \qquad (2)$$

This is also illustrated in Fig. 1. Because x is defined as a fraction of the trigger particle momentum, it is intuitive to define a three-dimensional near side with respect to the axis defined by the trigger momentum. 
The associated particle is defined to be in the near side if it is in the same hemisphere as the trigger particle:

$$\vec{p}_{\rm t} \cdot \vec{p}_{\rm a} > 0. \qquad (3)$$

The results have been binned in x rather than associated particle transverse momentum (p Ta ) because the definition of j T (Eq. (1)) has an explicit p Ta dependence. Bins in p Ta would bias the results since pairs with larger j T are more likely to be in bins of larger p Ta . In the case of x this bias is not present, since x and j T measure momentum components along perpendicular axes. Another advantage of using x is that the relative p T of the associated particles with respect to the trigger p T (p Tt ) stays the same in different p Tt bins. It was verified with a PYTHIA 8 [2,24] Monash tune simulation that the average fraction of the leading parton momentum taken by the leading particle ( z t ) varies less than 0.05 units inside the used x bins 0.2 < x < 0.4, 0.4 < x < 0.6, and 0.6 < x < 1.0, with lower p Tt bins having slightly larger z t than higher bins. The extracted j T distribution is of the form

$$\frac{1}{N_{\rm trigg}}\,\frac{{\rm d}N}{j_{\rm T}\,{\rm d}j_{\rm T}} = \frac{1}{N_{\rm trigg}}\,\frac{N_{\rm pairs}(p_{\rm Tt}, x, j_{\rm T})}{j_{\rm T}\,\Delta j_{\rm T}}\,C_{\rm associated}(p_{\rm Ta})\,C_{\rm Acc}(\Delta\eta, \Delta\varphi), \qquad (4)$$

where N trigg is the number of triggers, N pairs (p Tt , x , j T ) is the number of trigger-associated pairs, ∆ j T is the bin width of the used j T bin, C associated (p Ta ) is the single track efficiency correction for the associated particle and C Acc (∆η, ∆ϕ) is the pair acceptance correction. The single track efficiency correction is estimated by Monte Carlo simulations of PYTHIA 6 [25], PYTHIA 8 or DPMJET [26] events, using GEANT3 [27] detector simulation and event reconstruction. The pair acceptance correction is the inverse of the normalized mixed event distribution sampled at the corresponding (∆η, ∆ϕ) value. In the mixed event distribution, away-side particles must be included to properly correct for detector and acceptance effects. In this study, two distinct components are extracted from the j T distribution. A PYTHIA 8 simulation was performed to gain support for the separation of these components. To create a clean di-jet event sample, PYTHIA 8 was initialized to produce two hard gluons with a constant invariant mass for each event. The final-state QCD shower in PYTHIA 8 is modeled as a timelike shower, as explained in Ref. [28]. Two simulations were studied, one where the final-state shower was present and one where it was disabled. Without the final-state shower, the hadronization of the leading parton via Lund string fragmentation [29] develops without a QCD showering phase preceding it. When the final-state shower is allowed, the partons go through both showering and hadronization. The results of this study are presented in Fig. 2. The squares show a nearly Gaussian distribution resulting from the case when the final-state shower is disabled. The circles are obtained when the final-state shower is enabled. A long tail is observed which was not seen in the case with the final-state shower off. To estimate the QCD showering component, it is assumed that hadronization dominates at low j T , and the distributions from the two simulations are scaled to coincide at j T = 0. This scaling is needed since without QCD splittings the partons hadronize at a higher scale, producing more particles. With the subtraction of the "hadronization only" distribution from the total one, the QCD showering part can be separated. This is represented by the diamond symbols in Fig. 2. This study shows a possible factorization of the showering and hadronization parts of the jet fragmentation in PYTHIA 8. 
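To make the two correlation variables defined above concrete, the following minimal Python sketch computes j T and x for a single trigger-associated pair and applies the near-side requirement of Eq. (3). The function name and the toy momentum vectors are illustrative assumptions and are not taken from the actual ALICE analysis code.

```python
import numpy as np

def jt_and_x(p_trig, p_assoc):
    """Compute j_T and x for one trigger-associated pair.

    j_T = |p_t x p_a| / |p_t|   (component of p_a transverse to p_t, Eq. (1))
    x   = (p_t . p_a) / |p_t|^2 (projection of p_a onto p_t, Eq. (2))
    """
    p_trig = np.asarray(p_trig, dtype=float)
    p_assoc = np.asarray(p_assoc, dtype=float)
    pt_mag2 = np.dot(p_trig, p_trig)
    jt = np.linalg.norm(np.cross(p_trig, p_assoc)) / np.sqrt(pt_mag2)
    x = np.dot(p_trig, p_assoc) / pt_mag2
    return jt, x

# Toy example: a 10 GeV/c trigger along the x-axis and a softer associated particle
p_t = [10.0, 0.0, 0.0]
p_a = [4.0, 0.6, 0.3]

if np.dot(p_t, p_a) > 0:  # near-side requirement: same hemisphere as the trigger
    jt, x = jt_and_x(p_t, p_a)
    print(f"j_T = {jt:.3f} GeV/c, x = {x:.3f}")
```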
Based on simulations, template fit functions for hadronization and showering components have been estimated and used to extract the corresponding terms from the data. Since j T is a two-dimensional vector, using two-dimensional forms for the fit functions makes it possible to extract the final results from the functions more easily. Assuming that there is no dependence on the polar angle of the vector, the angle can be integrated out and the distributions written as a function of the length of the vector. The hadronization part can be described by a Gaussian:

$$\frac{{\rm d}N}{j_{\rm T}\,{\rm d}j_{\rm T}} = \frac{A_{2}}{\sqrt{2\pi}\,A_{1}}\,e^{-j_{\rm T}^{2}/(2A_{1}^{2})}, \qquad (5)$$

and the showering part by an inverse gamma function of the form:

$$\frac{{\rm d}N}{j_{\rm T}\,{\rm d}j_{\rm T}} = \frac{A_{3}\,A_{5}^{A_{4}}}{\Gamma(A_{4})}\,\frac{e^{-A_{5}/j_{\rm T}}}{j_{\rm T}^{A_{4}+1}}, \qquad (6)$$

where A 1...5 are the free fit parameters. In this paper, the hadronization part will be called the narrow component and the showering part the wide component. To determine the j T distribution in data, all charged particles inside each x bin are paired with the leading particle and j T is calculated for each of these pairs in an event. In the data, in addition to the signal, a background component mostly due to the underlying event is observed. Examples of measured j T distributions with the background included and subtracted are presented in Fig. 3. An η-gap method is used to estimate the background contribution. Pairs with |∆η| > 1.0 are considered as background from the underlying event. The background templates for the analysis are built by randomizing the pseudorapidities for the trigger and the associated particles, following the inclusive charged particle pseudorapidity distributions. Twenty randomized pairs are generated from each background pair to improve the statistics for the background. The template histograms, generated in bins of p Tt and x , are then fitted to the j T distribution together with a sum of a Gaussian function and an inverse gamma function. It can be seen from Fig. 3 that the fit is in good agreement with the data, except in the region around j T ∼ 0.4 GeV/c, where the data shows an increase with respect to the fit function. PYTHIA studies show that this structure is caused by correlations from neutral meson decays, dominated by decays of ρ 0 and ω, where one of the decay daughters is the leading charged particle in the event. The effect of this structure is taken into account in the evaluation of the systematic uncertainties. The goal of the analysis is to determine the root-mean-square (RMS) values and yields of the narrow and wide j T components. These are calculated from the parameters of the fit functions in equations (5) and (6). Systematic uncertainties The systematic uncertainties considered for this analysis arise from the background determination, the signal fitting procedure and the cuts used to select the tracks. The uncertainties related to the tracking are estimated from variations of the track selection cuts defined in Section 2. The resulting variations of the RMS and yield are below 3 % in most cases, but effects up to 17 % are observed for the yield of the wide component. The tracking efficiency contributes to the uncertainty of the yields only. This uncertainty is estimated from the difference between data and simulation in the TPC-ITS track matching efficiency, as was previously done in Refs. [30] and [31]. For pp collisions this uncertainty is 5 % and for p-Pb collisions 4 %. The effect due to the subleading track being reconstructed as a leading track was studied using simulations and found to be negligible due to the steep slope of the trigger spectrum. 
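As a rough numerical illustration of the two-component fit introduced in Eqs. (5) and (6), the sketch below fits the sum of a Gaussian and an inverse-gamma shape to a toy j T spectrum. The toy data, the starting values, and the bounds are assumptions for illustration only; the real analysis fits binned, background-subtracted distributions together with background templates.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

def narrow(jt, A1, A2):
    """Gaussian (hadronization / narrow) component, dN/(jT djT), cf. Eq. (5)."""
    return A2 / (np.sqrt(2 * np.pi) * A1) * np.exp(-jt**2 / (2 * A1**2))

def wide(jt, A3, A4, A5):
    """Inverse-gamma (showering / wide) component, dN/(jT djT), cf. Eq. (6)."""
    return A3 * A5**A4 / gamma(A4) * np.exp(-A5 / jt) / jt**(A4 + 1)

def two_component(jt, A1, A2, A3, A4, A5):
    return narrow(jt, A1, A2) + wide(jt, A3, A4, A5)

# Toy "measured" distribution: sample the model away from jT = 0 and add noise
rng = np.random.default_rng(1)
jt_bins = np.linspace(0.05, 4.0, 60)
truth = two_component(jt_bins, 0.4, 10.0, 3.0, 2.5, 1.5)
data = truth * rng.normal(1.0, 0.05, size=jt_bins.size)

popt, pcov = curve_fit(two_component, jt_bins, data,
                       p0=[0.5, 8.0, 2.0, 2.0, 1.0],
                       bounds=(1e-3, np.inf), maxfev=10000)
print("fitted parameters A1..A5:", np.round(popt, 3))
```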
The main source of uncertainty from the background evaluation comes from the background region definition. As an alternative method to the default procedure, uncorrelated background templates are generated from particles with $R = \sqrt{\Delta\varphi^{2} + \Delta\eta^{2}} > 1$ instead of those at large ∆η, and the pseudorapidities for the particle pairs are randomized together with the azimuthal angles. The associated uncertainty is typically below 5 %, but for the yield of the wide component the uncertainty can grow up to 46 % in the lowest p Tt and x bins, where the signal to background ratio is the worst (0.84 for pp and 0.33 for p-Pb). Changing the size of the η-gap produces small uncertainties compared to other sources, usually below 2 %. The effect of changing the number of new pairs generated for the background from 20 to 15 or 25 was also checked, but this was found to be negligible and is not included in the total uncertainties. The dominant source of uncertainty results from decaying neutral mesons. Even though this is a physical correlation in the j T distribution, it cannot be attributed to QCD showering or hadronization. The effect of the decay mesons is estimated from a variation of the fit range, excluding the region where the data shows an increase with respect to the fit function. The excluded regions are 0.25 < j T < 0.45 GeV/c, 0.2 < j T < 0.6 GeV/c or 0.2 < j T < 0.65 GeV/c for the x bins 0.2 < x < 0.4, 0.4 < x < 0.6 and 0.6 < x < 1.0, respectively. For the yield of the wide component the uncertainty can go up to 60 % in the 0.4 < x < 0.6 bin at low p Tt . In most cases, this uncertainty is well below 10 %. For the signal fit, the difference between fitting the background and the signal simultaneously and only the signal, after background subtraction, was evaluated. The uncertainty from this source was found to be typically smaller than 3 %, which is small compared to other sources. The different sources of systematic uncertainties were considered as uncorrelated and added in quadrature accordingly. In general, the systematic uncertainties for the wide component are larger than for the narrow component, since the signal to background ratio is significantly smaller for the wide component. Also the uncertainties for the yield are larger than for the RMS. The uncertainties are also p Tt and x dependent. For different results and datasets, the total systematic uncertainties vary within the ranges summarized in Tab. 1. The smallest uncertainty of 1.6 % for the narrow component RMS is found for the 0.2 < x < 0.4 and highest p Tt bins, while the largest uncertainty of 73 % for the yield of the wide component is found in the 0.4 < x < 0.6 and lowest p Tt bins. The systematic uncertainty estimation is also done for the PYTHIA and Herwig simulations, which are compared to the data. As the same analysis method is used for simulations and data, the same methods to estimate the systematic uncertainty can be applied. For the simulations, the uncertainty is estimated with the same procedure. Results and discussions The per trigger yields and widths of the j T distributions are determined as a function of the transverse momentum of the trigger particle in the range 3 < p Tt < 15 GeV/c for three x bins, 0.2 < x < 0.4, 0.4 < x < 0.6 and 0.6 < x < 1.0. The results are obtained from the area and RMS of the fits to the narrow and wide components of the j T distribution. 
The RMS values, $\sqrt{\langle j_{\rm T}^{2}\rangle}$, for both components in different x bins from √ s = 7 TeV pp and √ s NN = 5.02 TeV p-Pb collisions are compared with PYTHIA 8 tune 4C [32] simulations at the same energies in Fig. 4. The narrow component results show only a weak dependence on p Tt in the lowest x bin and no dependence on p Tt in the higher x bins. This behaviour is sometimes referred to as universal hadronization. There is also no difference between pp and p-Pb collisions. PYTHIA 8 simulations for the two energies give consistent results that are in agreement with data, within uncertainties. Comparing the three panels in Fig. 4, it can be seen that $\sqrt{\langle j_{\rm T}^{2}\rangle}$ is larger in higher x bins for both components. Kinematically, if the opening angle is the same, a larger associated momentum translates into a larger j T . Jets with larger momenta are known to be more collimated, but the net effect of these two might still increase j T . Also, if the trigger particle is not perfectly aligned with the jet axis but there is a non-negligible j T between these two axes, j T will be widened more in the higher x bins. For the wide component, it can be seen that there is a rising trend in p Tt in both pp and p-Pb collisions as well as in PYTHIA 8 simulations. This can be explained by the fact that higher p T partons are likely to have higher virtuality, which allows for more phase space for branching, thereby increasing the width of the distribution. Seeing that PYTHIA 8 simulations at √ s = 7 TeV and √ s = 5.02 TeV are in agreement, no difference related to the collision energy is expected in the real data either. Taking this into account, the fact that the pp and p-Pb results agree within the uncertainties suggests that no significant cold nuclear matter effects can be observed in the kinematic range where this measurement is performed. The results for the per trigger j T yield are presented in Fig. 5. The yield of the narrow component in data shows mostly no dependence on p Tt , with the exception of the lowest x bin where the yield rises with p Tt for p Tt < 8 GeV/c. The trend in the PYTHIA 8 simulation is different though: the yield decreases as p Tt grows. The simulation also overestimates the data for the yield of the narrow component. The discrepancy between the simulation and the data is around 50 % in the lowest p Tt and x bins. The overestimation of the yield was observed earlier in an underlying event analysis in pp collisions at √ s = 0.9 and 7 TeV [22]. The yield of the wide component shows a rising trend as a function of p Tt . This is expected if more splittings happen at higher p Tt , which would also explain the trend for the width. PYTHIA 8 simulations are in good agreement with the data for the yield of the wide component. The comparison to different Monte Carlo generator tunes is shown in Fig. 6. In this figure, the narrow and wide component $\sqrt{\langle j_{\rm T}^{2}\rangle}$ for √ s = 7 TeV pp collisions are compared to PYTHIA 8 tunes 4C and Monash [33], and to Herwig 7 [3,4] tune LHC-MB. Notice that the pp data points and PYTHIA 8 tune 4C curves are the same as in Fig. 4. The narrow component is best described by PYTHIA 8 tune 4C. The Monash tune is approximately 10 % above the data and Herwig 7 has a stronger x dependence than PYTHIA 8 or the data. For the wide component, both PYTHIA 8 tunes are compatible with the data for most of the considered intervals. Herwig 7 agrees well with the data in the lowest x bins. All three simulation curves overestimate the RMS at low p Tt in the 0.6 < x < 1.0 bin. 
At high p Tt , the central values of Herwig are larger than the data for x > 0.4, but the results are still consistent within the uncertainties. The same PYTHIA 8 and Herwig 7 tunes are compared to the √ s = 7 TeV pp yield in Fig. 7. Again, in this figure the pp and PYTHIA 8 tune 4C results are the same as in Fig. 5. For the narrow component, all the tunes overestimate the yield in most of the explored kinematic region. Herwig 7 shows a slightly … The ALICE results are compared with the previous measurements from CCOR and PHENIX in Fig. 8. These experiments use different methods to extract j T from the data. In CCOR, j T is obtained from a fit to an away side p out distribution, where p out is the momentum component of a charged track going outside of the plane defined by the trigger particle and the beam axis. They use a fit function in which $x_{\rm E} = -\vec{p}_{\rm Ta}\cdot\vec{p}_{\rm Tt}/|\vec{p}_{\rm Tt}|^{2}$ and the fit parameter k Ty is the y-component of the transverse momentum of the partons entering the hard scattering. The k Ty parameter needs to be included in the formula, since CCOR only studies distributions on the away side. PHENIX calculates $\sqrt{\langle j_{\rm T}^{2}\rangle}$ from a Gaussian fit to the azimuthal angle distribution, using a relation in which σ N is the width of the fitted Gaussian. At the lower collision energies of the ISR and RHIC, no evident wide component was observed in the data and thus only one component for j T was extracted by CCOR and PHENIX. This is connected to the current analysis given that, especially at the lower energies, the high-p T trigger particles are likely to have a high z t . PHENIX reported in [6] that this value is z t ∼ 0.6. Since the ISR had a lower collision energy than RHIC, z t cannot be lower in the CCOR experiment. In case the trigger particle takes most of the momentum of the leading parton, there is less phase space available for soft gluon radiation during the QCD showering phase. Thus, it appears that the dominant contribution to the particle yield comes from the hadronization part of the fragmentation, and the single component results may be compared to the narrow component results in this analysis. The PHENIX results are compatible with the ALICE results for the bin 0.4 < x < 0.6 and the CCOR results are close to the ALICE results for the bin 0.6 < x < 1.0. However, a comparison in the same bins is not possible because of the bias that the p Ta selections would induce in this analysis. Conclusions A new method to extract two distinct j T components for a narrow (hadronization) and wide (QCD branching) contribution using two-particle correlations was presented in this work. The RMS and per trigger yield were obtained for both components. The width of the narrow component shows only a weak dependence on the trigger particle transverse momentum and no difference between pp and p-Pb collisions. The results from this analysis are also qualitatively compatible with the previous ones at lower √ s, measured by the PHENIX and the CCOR experiments. All of these observations support the universal hadronization expectation. The width of the wide component is found to increase with increasing p Tt in all x bins. This can be explained by stronger parton splitting, which is allowed by a larger phase space. A similar argument can be used to explain why the wide component has not been previously observed at the ISR or at RHIC, since the larger collision energy at the LHC increases the phase space for QCD splittings. As there is no difference in the wide component RMS between pp and p-Pb, cold nuclear matter effects do not play a large role in this kinematic regime. 
PYTHIA 8 and Herwig 7 simulations describe the widths for both components well, but both simulations overestimate the yield of the narrow component. These measurements could be used to further constrain the parameters in the models to better reproduce the data. An interesting follow-up study would be to look at the same measurement in heavy-ion collisions. As it is shown that there are no cold nuclear matter effects in p-Pb j T distributions, any modifications in the distributions could be attributed to final-state effects, such as partonic energy loss in the quark-gluon plasma. The wide component might be able to discriminate between different jet shape modification mechanisms in Pb-Pb collisions, like interactions with the plasma [34], color decoherence effects [35], and changes in relative quark and gluon jet fractions [36]. Acknowledgements We wish to thank Torbjörn Sjöstrand for his help in defining a di-gluon initial state in PYTHIA 8. The ALICE Collaboration would like to thank all its engineers and technicians for their invaluable contributions to the construction of the experiment and the CERN accelerator teams for the outstanding performance of the LHC complex. The
7,093.8
2018-11-24T00:00:00.000
[ "Physics" ]
Comparison of the cox regression to machine learning in predicting the survival of anaplastic thyroid carcinoma Background To compare the ability of the Cox regression and machine learning algorithms to predict the survival of patients with Anaplastic thyroid carcinoma (ATC). Methods Patients diagnosed with ATC were extracted from the Surveillance, Epidemiology, and End Results database. The outcomes were overall survival (OS) and cancer-specific survival (CSS), divided into: (1) binary data: survival or not at 6 months and 1 year; (2): time-to-event data. The Cox regression method and machine learnings were used to construct models. Model performance was evaluated using the concordance index (C-index), brier score and calibration curves. The SHapley Additive exPlanations (SHAP) method was deployed to interpret the results of machine learning models. Results For binary outcomes, the Logistic algorithm performed best in the prediction of 6-month OS, 12-month OS, 6-month CSS, and 12-month CSS (C-index = 0.790, 0.811, 0.775, 0.768). For time-event outcomes, traditional Cox regression exhibited good performances (OS: C-index = 0.713; CSS: C-index = 0.712). The DeepSurv algorithm performed the best in the training set (OS: C-index = 0.945; CSS: C-index = 0.834) but performs poorly in the verification set (OS: C-index = 0.658; CSS: C-index = 0.676). The brier score and calibration curve showed favorable consistency between the predicted and actual survival. The SHAP values was deployed to explain the best machine learning prediction model. Conclusions Cox regression and machine learning models combined with the SHAP method can predict the prognosis of ATC patients in clinical practice. However, due to the small sample size and lack of external validation, our findings should be interpreted with caution. Supplementary Information The online version contains supplementary material available at 10.1186/s12902-023-01368-5. Introduction Thyroid carcinoma is the fifth most common cancer among women in the United States [1].In recent decades, the incidence of thyroid carcinoma has increased dramatically in many countries [2].A study of the United States analyzed the 10-year data from 2007 to 2016, and reported that the incidence of thyroid carcinoma among young people of all ages (15-39 years old) ranked the top three [3].Among thyroid carcinoma, anaplastic thyroid carcinoma (ATC) accounts for 1-2% [4], but it is the most aggressive type and highly malignant, which is the main cause of death associated with thyroid malignant tumors.The median survival time of ATC is only 5-6 months [5].The quality of life among ATC patients is significantly reduced, coupled with persistent occupation of medical resources and high mortality rate, which result in a heavy economic and social burden.Therefore, accurate prediction of ATC patient survival and understanding the drivers of these predictions are critical for clinically targeted therapy. The known risk factors related to the prognosis of ATC include age, sex, race, marital status, insurance, socioeconomic status, level of education, tumor stage, tumor size, multifocality, surgery, radiotherapy, chemotherapy and so on [6][7][8][9][10][11]. 
Additionally, the AJCC 8th edition reveals a better performance than the AJCC 7th edition TNM staging in predicting survival of ATC patients [12]. Traditional methods for predicting survival of ATC patients are based on existing clinical and sociodemographic predictors, using Cox proportional hazards (Cox) regression analysis to establish nomograms [12][13][14][15][16]. Although the estimated C-index calculated by some models appears to be ideal, there is still a risk of overfitting. With the rapid development of precision medicine, machine learning (ML) has been widely applied in medical fields such as outcome prediction, diagnosis, medical image interpretation and treatment [17]. Applications of ML in thyroid carcinoma consist of diagnosis, nodule identification and risk factor analysis [18][19][20][21]. However, few data exist on applications of ML for prognostic analysis in ATC patients. ML does not need to assume the relations between input variables and output variables, and takes into account all possible interactions and effect corrections between variables [22]. More importantly, ML is an efficient and accurate substitute for semi-parametric and parametric models. In this study, we aimed to compare the application of Cox regression and ML algorithms for survival prediction among ATC patients. Strategies aimed at selecting the most suitable predictive model could help clinicians to intervene on risk factors in a timely manner and prescribe treatments properly, enhancing the understanding of the decision-making process for assessing ATC. Predictor variables and outcomes We collected all relevant data including Age, Sex, Race, Marital status, Insurance, No high school diploma, Families below poverty, AJCC TNM stage, Tumor size, Multifocality, Regional lymph node surgery, Thyroid surgery, Radiotherapy, Chemotherapy. According to the AJCC 8th edition TNM staging system for thyroid cancer, the different TNM stagings were converted into 8th edition TNM staging uniformly [23]. The T staging of ATC is no longer restricted to the T4 stage; we use the same definition of T staging for ATC and differentiated thyroid cancer (DTC). According to tumor size and tumor extension, the different T stagings of the AJCC were unified into the AJCC 8th edition T staging, which was divided into T1 stage, T2 stage, T3a stage, T3b stage and T4 stage. In this study, X-tile software was used to analyze continuous variables to obtain the best cut-off value and group them. The variables analyzed by X-tile software included: Age, No high school diploma, Families below poverty, Tumor size. In addition, groups with small numbers were merged: T1 and T2 stages of the AJCC were merged into T1-2, and T3a and T3b stages were merged into T3 stage. The primary endpoints of the study were overall survival (OS) and cancer-specific survival (CSS). OS was defined as the time interval from diagnosis to death from all causes, and CSS was from diagnosis to death from that tumor alone. According to the different types of outcomes, we divided them into binary outcomes: 6-month OS, 12-month OS, 6-month CSS and 12-month CSS. In addition, the outcomes were also divided into time-to-event data for analysis. 
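As a minimal illustration of how the two outcome formats described above can be derived from follow-up data, the sketch below builds 6- and 12-month death indicators and (time, event) pairs. The column names and the toy records are hypothetical and do not correspond to the actual SEER extraction.

```python
import pandas as pd

# Hypothetical follow-up records; column names are illustrative, not the SEER fields
df = pd.DataFrame({
    "survival_months":        [3, 14, 7, 25, 2],
    "dead_any_cause":         [1, 0, 1, 1, 1],   # 1 = death from any cause (OS event)
    "dead_of_thyroid_cancer": [1, 0, 0, 1, 1],   # 1 = cancer-specific death (CSS event)
})

# Binary outcomes: death within 6 or 12 months of diagnosis.
# (Patients still alive but with follow-up shorter than the horizon would normally
# be excluded or handled separately; that subtlety is ignored in this toy sketch.)
for months in (6, 12):
    df[f"os_death_{months}m"] = ((df["dead_any_cause"] == 1) &
                                 (df["survival_months"] <= months)).astype(int)
    df[f"css_death_{months}m"] = ((df["dead_of_thyroid_cancer"] == 1) &
                                  (df["survival_months"] <= months)).astype(int)

# Time-to-event outcomes: (time, event) pairs for Cox regression and survival ML
os_outcome = df[["survival_months", "dead_any_cause"]]
css_outcome = df[["survival_months", "dead_of_thyroid_cancer"]]
print(df)
```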
Data preprocessing We counted the missing rates of all predictors, and retained factors with a missing rate of less than 30%. K-Nearest Neighbor (KNN) algorithms were used to fill missing values. Multicollinearity is evaluated by the variance inflation factor (VIF). VIF < 10 indicates that there is no multicollinearity among the variables. Correlation was determined by Spearman correlation analysis. A correlation coefficient greater than 0.5 indicates a significant correlation between variables. Model development and evaluation For binary outcomes, we used four machine learning algorithms, Logistic, Random Forests, Extreme Gradient Boosting (XGBoost) and Adaptive Boosting (AdaBoost), to construct models and compared the pros and cons of these models. Similarly, for time-event outcomes, we compared the models constructed by Cox regression with five machine learning algorithms: Survival Tree, Survival Support Vector Machine (SVM), Random Survival Forests, XGBoost and DeepSurv. In this study, 70% of all patients were used for training and 30% for validation, using the random number table method. The differences between the training set and validation set were tested depending on the type of outcome variable: if the outcome variable was continuous, the t-test was used, while if the outcome variable was categorical, the chi-square test or Fisher's exact test was used. In Cox regression, we used the bidirectional stepwise regression method for variable screening, which automatically selects the variables with the smallest Akaike information criterion (AIC) to construct the model. The results of the Cox regression model are presented in the nomogram. In the machine learning models, we used the XGBoost method to filter variables. We used a combination of grid search and multiple cross-validation to select the parameter values corresponding to the best C-index values as model parameters [24]. In order to avoid overfitting, the evaluation of the model comprehensively considers the results of the training set and the validation set, but mainly the results of the validation set. We used the C-index to describe the discrimination of the model. The C-index value can generally judge the generalization ability of the model: 0.5-0.7 means that the model has a weak generalization ability, 0.7-0.85 moderate, and 0.85-1.0 strong. In addition, we also used multiple evaluation indicators such as accuracy, sensitivity, and specificity to comprehensively evaluate the discriminative ability of the machine learning models. We used the calibration curve and the Brier score to evaluate the calibration of the models. In the calibration plot, the X-axis represents the predicted survival time and the Y-axis the actual survival time, with the predicted rate falling on the 45° diagonal in a perfect prediction model. The lower the Brier score value, the better the calibration. We assessed the net benefit of the models for clinical decision making through the DCA curve. Kaplan-Meier analysis and log-rank test were used to explore differences in survival between risk subgroups. 
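A compact sketch of the modeling workflow described in this section (a 70/30 split, a Cox model for the time-to-event outcome, and a grid-searched logistic model for a binary outcome) is given below. It uses synthetic data and hypothetical variable names, and it stands in only for the general procedure, not for the actual SEER-based pipeline or the other algorithms (random survival forests, XGBoost, DeepSurv) used in the study.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the analytic dataset (hypothetical variable names)
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "age_group": rng.integers(0, 3, n),
    "tumor_ge_6cm": rng.integers(0, 2, n),
    "distant_metastasis": rng.integers(0, 2, n),
    "surgery": rng.integers(0, 2, n),
    "time_months": np.round(rng.exponential(8, n), 1) + 0.1,
    "event": rng.integers(0, 2, n),          # 1 = death observed
})
df["death_within_12m"] = ((df["event"] == 1) & (df["time_months"] <= 12)).astype(int)

# 70/30 split into training and validation sets
train, test = train_test_split(df, test_size=0.3, random_state=42)
predictors = ["age_group", "tumor_ge_6cm", "distant_metastasis", "surgery"]

# Time-to-event model: Cox proportional hazards regression
cph = CoxPHFitter()
cph.fit(train[predictors + ["time_months", "event"]],
        duration_col="time_months", event_col="event")
print("Cox C-index (training set):", round(cph.concordance_index_, 3))

# Binary 12-month outcome: logistic model tuned by grid search with cross-validation
grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    param_grid={"C": [0.01, 0.1, 1, 10]},
                    cv=5, scoring="roc_auc")
grid.fit(train[predictors], train["death_within_12m"])
print("Logistic AUC (validation set):",
      round(grid.score(test[predictors], test["death_within_12m"]), 3))
```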
Model interpretation The SHapley Additive exPlanation (SHAP) is a unified framework for interpreting the results of machine learning models [25].We utilized SHAP to provide explanations for the final model, including associated risk factors causing death in patients with ATC and the importance of sorting features.Our study was reported following the TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) statement [26].All statistical analyses in this study were performed using R software (version 4.0.2) and Python software (version 3.7.6).P value of < 0.05 was considered statistically significant. Patient characteristics 1190 patients diagnosed with ATC were identified from the SEER database from 2004 to 2015.According to the exclusion criteria, 730 patients were finally included.The flow-process diagram of data screening was shown in Fig. 1.Since the missing rate of each variable was < 30%, we did not remove the variable with low missing rate.The variables with the highest missing rate were insurance (22.9%), tumor size (19.5%) and multifocality (18.4%).According to the X-tile program, the optimal age cutoff points were 60 and 80 years old, and age groups were divided into < 60, 60-79 and ≥ 80 years.The optimal cutoff points divided tumor size into < 6 cm and ≥ 6 cm, no high school diploma into < 21% and ≥ 21%, and families below poverty into < 14% and ≥ 14%. Except for race (P < 0.001) and no high school diploma (P = 0.006), the clinical characteristics of patients with ATC in the training set and the validation set were not significantly different (P > 0.05).No multicollinearity was found among every variable (VIF<10).Spearman correlation showed that the correlation between no high school population and families below population was strong (0.73), which was low between other variables (all < 0.5) (Supplemental Fig. 1). Model results and model performance The results of Cox regression model were displayed in the nomogram (Fig. 2).The total score was obtained by adding the scores corresponding to each predictor.The 6-month and 12-month OS and CSS corresponding to the total score scale under the nomogram were obtained.Age, Families below poverty, AJCC T 8th, AJCC M 8th, tumor size, surgery, radiotherapy, chemotherapy were included in the nomogram for OS and CSS.The tumor stage had the greatest impact on survival, and the therapeutic schedule also affected the survival of patients with ATC: surgery, radiotherapy and chemotherapy.Timedependent ROC of the nomogram predicting 6-month, 1-year OS and CSS were shown in Supplemental Fig. 2. The ROC curves of machine learning algorithms were shown in Fig. 3. 
C-index values for dichotomous outcomes were summarized in Table 2, including 6-month and 12-month OS and CSS. In the training set, the random forest algorithm had the largest C-index value and presented the best performance (6-month OS: 0.834; 12-month OS: 0.886; 6-month CSS: 0.857; 12-month CSS: 0.910). In the validation set, the C-index values of the random forest algorithm were significantly lower than those in the training set, due to possible overfitting. Combining the results of the training set and the validation set, we found that the logistic algorithm presented the best performance. In the validation set, the C-index values of the logistic algorithm were: 6-month OS: 0.790; 12-month OS: 0.811; 6-month CSS: 0.775; 12-month CSS: 0.768. The results of the survival analysis on time-to-event data showed that the DeepSurv algorithm performed best in the training set but not in the validation set. For the machine learning algorithms (logistic, random forest, XGBoost, AdaBoost), we also calculated the accuracy, sensitivity and specificity, as shown in Table 2. The logistic algorithm had high accuracy, sensitivity and specificity, which indicated that the model was effective. Combining the values of the C-index, accuracy, sensitivity and specificity, we found that the logistic algorithm presented the best performance among the machine learning algorithms. According to the calibration curve, the overall survival rates predicted by the nomogram in the training set and the validation set had good consistency with the actual overall survival rates (Supplemental Fig. 3). In addition, the logistic algorithm also showed good consistency (Supplemental Fig. 4). As for the Brier score, the value of the logistic algorithm was the minimum among the machine learning algorithms, indicating the best calibration. The Brier scores of the Cox regression and the logistic algorithm were similar. To evaluate the practicability of each model, we plotted the DCA curve (Supplemental Fig. 5 and Supplemental Fig. 6). The DCA curve represents the net benefit of clinical decisions. The y-axis represents the net benefit and the x-axis represents the risk threshold. The horizontal line indicates all true negative rates and the diagonal line indicates all true positive rates. This showed that the Cox regression model and the logistic algorithm had good clinical applicability in predicting the 6-month and 12-month survival rates of ATC and had high net benefits. In the risk stratification KM curves, the patients were divided into high-risk and low-risk groups based on the cut-off values of the total score in the nomogram. For OS, the cut-off value was 169, and for CSS, the cut-off value was 174. As shown in Supplemental Fig. 7, the log-rank p was lower than 0.0001, indicating that there was a significant difference between the high-risk group and the low-risk group. The results suggested that the nomogram had high discrimination for the degree of risk. Model interpretation of machine learning We used SHAP to explain the results of the best machine learning model. Based on the SHAP algorithm, the feature ranking interpretations of the logistic algorithm were shown in Fig. 4. The attributes of the features predicting 6-month OS, 12-month OS, 6-month CSS and 12-month CSS were shown in Fig. 4. For 6-month OS, AJCC M 8th, Chemotherapy, Regional lymph node surgery, Tumor size and AJCC T 8th were the characteristics of the logistic algorithm models which had the greatest impact on the prediction results. The feature ranking showed that AJCC TNM staging was an important factor for survival prediction of ATC, and AJCC M 8th was the most important feature for OS and CSS. 
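To show how the SHAP-based interpretation used here can be reproduced in principle, the sketch below explains a logistic model with SHAP and ranks features by mean absolute SHAP value. The feature names, the synthetic data, and the choice of the linear explainer are assumptions for illustration, not the study's actual model or dataset.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.linear_model import LogisticRegression

# Synthetic binary features with hypothetical names
rng = np.random.default_rng(1)
X = pd.DataFrame({
    "ajcc_m_stage": rng.integers(0, 2, 400),
    "chemotherapy": rng.integers(0, 2, 400),
    "tumor_ge_6cm": rng.integers(0, 2, 400),
    "age_ge_80": rng.integers(0, 2, 400),
})
# Synthetic outcome loosely driven by M stage (risk) and chemotherapy (protective)
y = (0.8 * X["ajcc_m_stage"] - 0.5 * X["chemotherapy"]
     + rng.normal(0, 0.5, 400) > 0.3).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# The linear explainer is exact for logistic models; summary_plot draws the beeswarm
explainer = shap.LinearExplainer(model, X)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X, show=False)

# Mean absolute SHAP value as a simple feature-importance ranking
importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(importance.sort_values(ascending=False))
```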
Discussion By comparing the prediction performance of different ML algorithms to the reference method (Cox regression), our findings suggested that Cox regression performed well as a conventional method for ATC survival prediction.Among ML algorithms, Logistic algorithm demonstrated the best performance.Combining SHAP values, Logistic algorithm illustrated key predictive factors and established a high-accuracy survival prediction model.In our study, we used the Cox regression model to identify the most influential predictors and create a nomogram to predict the risk of cancer outcomes for individual patients.The nomogram provides a user-friendly tool for clinicians to assess the risk of cancer outcomes and stratify patients into low-and high-risk groups, which is useful for clinical decision-making.Furthermore, we used the SHAP method to rank the importance of predictors and differentiate their impact on the risk of cancer outcomes.This approach provides a visual and intuitive way to identify protective and risk factors and guide clinical judgment and decision-making. Our study solved the limitations of ML in predicting the prognosis of ATC survival by including more possible factors.We collected multifaceted disease-related predictors, such as baseline patient information, clinical diagnosis, medical therapy, surgery therapy and so on, we also extracted relevant variables which may influence the development of disease, such as economic condition and education.And the 8th edition of AJCC TNM staging criteria was finally applied to disease strategy for better performance.Our models showed a high C-index value, indicating a remarkable generalization ability and clinical value, providing distinct explanations helping to predict survival rate, which drove clinicians to understand the decision-making process for assessing disease severity.Different from our study, other researchers tended to apply Cox regression and Logistic regression to analyze risk factors and constructed a predictive model.Gui et al. [13] found that the important predictors for survival rate of ATC were age, historic stage, tumor size, surgery therapy, radiotherapeutic, as analyzed by multivariable Cox proportional hazard regression models.In terms of prediction performance, the nomograms showed a C-indexs value of 0.765 for OS, and 0.773 for CSS.Based on preoperative variables and postoperative variables, Qiu et al. [16] constructed two prognostic nomograms, and the C-index were 0.6783 and 0.7029.The data for the above study were obtained from the SEER database.Meanwhile, a retrospective Study from Regional Registry studied 149 patients with ATC showed that age, tumor size, distant metastasis status were independent variables, definitely affecting survival rate of ATC, as analyzed by multivariable Cox proportional hazard regression [27].Traditional Cox regression is the most convenient way to solve most survival prediction problems because its results are easy to interpret.However, Cox regression models should be used with a minimum of 10 outcome events per predictor variable (EPV) [28]. 
ML is an efficient and accurate substitute for semi-parametric and parametric models, with the advantages of high computational efficiency and excellent performance. ML algorithms do not need to consider factors such as non-proportionality, multicollinearity, or nonlinearity, reducing the prediction bias caused by modeling uncertainty. Unfortunately, its application in clinical practice is hindered by the lack of interpretability. Subsequently, SHAP comes into use, aiming to elucidate how the machine models produce their output in easily understood terms, and makes up for the disadvantages mentioned above. There has been no targeted application of machine learning algorithms to predict the survival of patients with anaplastic thyroid carcinoma (ATC). Here, we calculated subject-level survival curves by analyzing outcome variables in binary models as well as time-to-event models, providing a better understanding of predicted survival. The results of this paper indicated that the models built by ML incorporated fewer predictors and performed no worse than traditional Cox regression. As a substitute for Cox regression, the Logistic algorithm combined with SHAP values showed superior performance in clinical applications. However, it is important to note that the predictive efficacy of Cox regression in predicting the survival of ATC patients was comparable with that of the ML algorithms, suggesting that the superiority of ML is not always seen but only in situations where the conventional methods reach their limits. Deep learning is a branch of machine learning, which requires less data engineering and achieves more accurate prediction when processing a large amount of data. Deep learning has been applied in many fields of medical practice, including image diagnosis, digital pathology, cancer prognosis, etc. [29]. Previous studies have shown that the performance of deep learning models in survival analysis is better than that of the traditional Cox regression model [30,31]. We used the deep learning method named DeepSurv to predict the survival of ATC patients. The results show that the DeepSurv algorithm is better than Cox regression in the training set. However, no obvious advantages were seen in the validation set. It can be seen that deep learning is challenging in the application of cancer prognosis. The performance of a deep learning model depends on the amount of data [32]. When the amount of patient data is relatively small, suboptimal performance and overfitting problems are usually seen. 
Cox regression results showed that Age, Families below poverty, AJCC T 8th, AJCC M 8th, tumor size, surgery, radiotherapy and chemotherapy were important factors in predicting OS and CSS, among which the therapeutic approaches, including surgery, radiotherapy and chemotherapy, were protective factors. Importantly, older age, higher poverty rate, larger tumor size and more advanced stage suggested a poorer prognosis. Similarly, in the Logistic algorithm analysis, AJCC T 8th and AJCC M 8th were included as important factors in the survival prediction of ATC patients, which was consistent with previous research [33,34]. By evaluating SHAP values, we found that AJCC M 8th was the most important predictive factor, which is consistent with a previous study [13]. In our study, the AJCC N 8th edition staging was not included among the predictive factors. However, regional lymph node surgery was analyzed in the prediction of 6-month OS and 6-month CSS when using the Logistic algorithm. In addition, studies have shown that the log odds of positive LN (LODDS) showed better predictive performance than the AJCC N stage [35]. Radiotherapy and surgery, as compared with the control group, improved patient outcomes, which is consistent with the findings of Gui et al. [13]. In addition, we found that chemotherapy was also a protective factor for the prognosis of ATC patients. This study has several limitations. First, this is a retrospective study with a small sample size, which may cause bias. More large-scale prospective studies are needed to validate the efficacy of our models. Second, although we included more predictors than previous studies, such as economy, education and marriage, our study did not analyze the impact of immunotherapy and targeted therapy, which were highlighted in recent progress of ATC treatments [23]. Finally, we did not perform performance comparisons with previously established predictive models because of differences in the analyzed variables. In the future, we will try to build a deep learning model to predict the prognosis of ATC and conduct hierarchical analyses, by analyzing more data and information. Conclusion In conclusion, our study compared the application of Cox regression and ML algorithms in survival prediction of ATC patients. The results of our study showed that Cox regression and the Logistic algorithm combined with SHAP values had a good predictive effect in survival prediction of anaplastic thyroid cancer. However, due to the small sample size and lack of external validation, our results need to be viewed more cautiously. Fig. 4 The Logistic model based on the SHAP algorithm. (Note: (A) The attributes of the features predicting 6-month OS; the Y-axis represents features, the x-axis represents the degree of influence on the outcome, each dot represents a sample, the red dots represent high risk values and the blue dots represent low risk values. (B) Ranking of feature importance predicting 6-month OS; (C) The attributes of the features predicting 12-month OS; (D) Ranking of feature importance predicting 12-month OS; (E) The attributes of the features predicting 6-month CSS; (F) Ranking of feature importance predicting 6-month CSS; (G) The attributes of the features predicting 12-month CSS; (H) Ranking of feature importance predicting 12-month CSS) Table 1 Demographic characteristics of patients with ATC
5,027.4
2023-06-05T00:00:00.000
[ "Medicine", "Computer Science" ]
Five decades of corporate entrepreneurship research: measuring and mapping the field Research on corporate entrepreneurship—venturing activities by established corporations—has received increasing scholarly attention. We employ bibliometric methods to analyze the literature on corporate entrepreneurship published over the last five decades. Based on the results of citation and co-citation analyses, we reveal central works in the field and how they are interconnected. We investigate the underlying intellectual structure of the field. Our findings provide evidence of the growing maturity and interdisciplinarity of corporate entrepreneurship and provide insight into research themes. We find that resource-based view and its extensions still remain the predominant theoretical perspectives in the field. Drawing on these findings, we suggest directions for future research. Introduction Corporate entrepreneurship has become paramount for established firms, because the intensified competition with both startups and incumbents forces them to sustain their competitive advantage with innovativeness, courage, risk propensity, and entrepreneurial leadership (Covin and Miles 1999;Kuratko 2009). Scholars have shown increasing interest in better understanding organizational entrepreneurial agency over the last 50 years Dess et al. 2003;Zahra et al. 2013). The high number of publications made corporate entrepreneurship (CE) a wide, complex, scattered, and inconsistently conceptualized research field, which is difficult to oversee (Covin and Lumpkin 2011;Kuratko et al. 2014;Sharma and Chrisman 1999;Stopford and Baden-Fuller 1994). To overcome this lack of clarity, there is a need to systematize and synthesize past research and to reflect about future research avenues (Low and MacMillan 1988;Zupic and Čater 2015). Against this background, our research goal is to provide an overview of the current state of research in the field of CE and to make recommendations in regard to its further development. To achieve this goal, we employ bibliometric analyses. This methodology, rather than a more traditional literature review, allows handling large quantities of publications and is citation-frequency based and thus a less subjective approach (Zupic and Čater 2015). On our dataset of 674 documents published between 1937 and 2019, we employ citation analyses to examine the annual distribution of publications, productivity and impact of journals and authors and the most influential articles. Additionally, we use co-citation analysis to detect existing fields of research themes or "invisible colleges" in CE research (Crane 1972). By employing multivariate analysis methods to process the results of co-citation analysis, we achieve results that are highly robust. Drawing on these findings, we discuss possible directions for future research. Our research contributes to the CE literature by offering deep insights into the structure and the evolution of the knowledge base and by identifying research gaps and pointing out future research opportunities. According to Sharma and Chrisman (1999), CE takes place within established organizations and refers to the creation of new businesses, strategic renewal, or innovation within this organization. Guth and Ginsberg (1990), Stopford and Baden-Fuller (1994), Thornberry (2001) and Zahra and Covin (1995) identify three dimensions of CE. The first dimension is innovation, which is considered to be "the heart of entrepreneurship" (Stevenson and Gumpert 1985). 
Innovation can refer to the introduction of new products, processes, technologies, systems, techniques or capabilities to the firm or to its markets (Burgelman et al. 1988). Innovation is vital to all types of entrepreneurship (Covin and Miles 1999;Stopford and Baden-Fuller 1994). The second dimension is corporate venturing (CV), which describes the process of entering, investing in new business or adding of new businesses to an existing organization McGrath et al. 2006). Despite the existence of different types of CV, such as internal, external or cooperative CV, all these types possess one commonality, i.e., adding a new business to an existing organization (Kuratko and Audretsch 2013). CV is considered to be one of the possible ways of achieving strategic renewal (Guth and Ginsberg 1990), the third dimension of CE. Strategic renewal aims to create wealth through new resource combinations (Guth and Ginsberg 1990) and involves major strategic or structural changes within the existing organization (Sharma and Chrisman 1999). Covin and Miles (1999) suggest four possible manifestations of CE in established organizations: strategic renewal, sustained regeneration, domain redefinition, and organizational rejuvenation. Morris et al. (2011) outlined two major dimensions of CE, namely CV and strategic entrepreneurship. Although these conceptualizations may seem different, they still define innovation as the key element for CE (Covin and Miles 1999;Lampe et al. 2019). The concept of intrapreneurship, separate from CE, takes a closer look at the entrepreneurial activity from the perspective of an individual within an existing organization (Stevenson and Jarillo 1990). Initially coined by Pinchot (1985), the term "intrapreneuring" focused on the creation of new business, exploitation of new opportunities or creation of economic value by an individual or a group of individuals within an existing organization. Some scholars suggested a broader definition of this term and defined it as "entrepreneurship within an existing organization" (Antoncic and Hisrich 2001, p. 496). Strategic entrepreneurship (SE) emerged at the intersection of the fields of entrepreneurship and strategic management . SE is the integration of opportunity-seeking (entrepreneurial perspective) and advantage-seeking (strategic perspective) behaviors (Hitt et al. 2001;Ireland et al. 2003). Opportunity-seeking behavior is aimed at the identification of new opportunities, whilst advantage-seeking behavior refers to the exploitation of these opportunities to establish and maintain the firm's competitive advantage (Hitt et al. 2001). Whereas the concepts of CE and intrapreneurship reflect only the opportunity-seeking perspective, SE adds a focus on acting strategically in order to establish sustaining competitive advantage . It emphasizes that, to create wealth, both advantage-and opportunity-seeking behavior is necessary (Hitt and Ireland 2000). Another research theme is entrepreneurial orientation (EO), defined as "an organizational attribute that exists to the degree to which that organization supports and exhibits a sustained pattern of entrepreneurial behavior reflecting incidents of proactive new entry" (Covin and Wales 2019, p.3). EO and CE are tightly related but still seperate concepts . The former relates to the organizational attribute and does not address which forms entrepreneurial actions can take within the organizations . 
The latter, on the contrary, describes the entrepreneurial activities in terms of their specific forms, such as venturing, innovation, and strategic renewal Miller 1983). Data collection and cleansing We collected the bibliometric data from the Web of Science (WoS) by Clarivate Analytics, the Ebscohost Business Source Premiere Collection, and Google Scholar. Due to its broad coverage of publications in the social sciences (Norris and Oppenheim 2007), the WoS database is the most used data source for bibliometric analyses (Zupic and Čater 2015) and has also been used in entrepreneurship research (Cornelius et al. 2006;Hota et al. 2019;López-Fernández et al. 2016;Vallaster et al. 2019). We complemented the dataset from Ebscohost Business Source Premiere Collection, and Google Scholar, because the WoS, despite being comprehensive, does not cover all potentially relevant literature gaplessly. To cover the field of entrepreneurial activities employed by established corporations rather than startups, we broadly searched for the keywords "corp* entrep*", "corp* ventur*", "organi* entrep*", "firm-level* entrep*", "intrapreneur*", and "internal* entrep*" in titles, abstracts, and keywords. The asterisk (*) was used as a truncation symbol allowing to search for the terms with a different ending, e.g. entrepreneur, entrepreneurship, or entrepreneurial (Granados et al. 2011). Several of the concepts and sub-concepts addressed in the previous section could not be used as search terms as they are not specific enough to relate only to corporate entrepreneurship. For example, innovation has as much broader meaning and is also employed in startups. Similarly, strategic renewal does not necessarily require an entrepreneurial context. As strategic entrepreneurship and entrepreneurial orientation depict distinct research fields with their own literature base, we also dropped it as a search term. On entrepreneurial orientation, just recently, a separate bibliometric analysis was conducted (Wales et al. 2020). The search was restricted to articles, reviews, books, and book chapters written in English. We included books and book chapters, because they were the only source before the establishment of entrepreneurship journals, especially for European scholars (Aldrich 2012). We excluded publications from 2020 to have only full years of coverage. As they received no or only a few citations so far, they are not relevant for the co-citation analysis anyway. As WoS and Ebscohost both contain a very limited number of books and book chapters, we conducted a Google Scholar search to add the missing documents to the sample. Due to the limited options of exporting the bibliometric data from Google Scholar (Zupic and Čater 2015), limited search opportunities (e.g., lack of keyword field search), and a high number of "grey literature", the keyword search was only conducted in the title field of the documents. The results were exported with the help of Publish or Perish software (Harzing 2007) and were limited to the document types "Book" or "Book chapter". As a result of these searches, we retrieved 1512 documents from WoS, 1271 from Ebscohost, and 71 from Google Scholar. To ensure the quality of the dataset, first, we removed 449 duplicates. Second and as mentioned earlier, we also removed the 21 publications from 2020 to have only full years of coverage. Third, we applied a quality threshold by excluding all journals which were assigned to the third or fourth quartile of the Scimago Journal Rank (SJR). 
We opted for the SJR rather than the more common Journal Citation Reports (JCR) because the SJR covers more of the journals that appear in our dataset, weights citations according to the "prestige" of the citing journal, excludes journal self-citations, and uses an evaluation period of three rather than two years (Falagas et al. 2008). This approach is justified by the maturing of entrepreneurship research (Aldrich 2012;Busenitz et al. 2003, 2014), which is evidenced by the increased percentage of entrepreneurship articles published in the leading business and management journals as well as by the increased citation numbers of top entrepreneurship journals (Busenitz et al. 2014). Therefore, we expect the knowledge base of the research field to be mainly formed by works published in high-quality business, management, and entrepreneurship journals. As a result, another 52 papers were excluded. Fourth, we screened the titles, abstracts, and keywords of all remaining papers and dropped the ones which did not mainly focus on corporate entrepreneurship, such as those on academic entrepreneurship, social entrepreneurship, or product innovation (Hill and Georgoulas 2016;Zupic and Čater 2015). To ensure accurate results in the analysis stage, different spellings of author names and journal titles were unified and the references to multiple editions of the same book were merged (Zupic and Čater 2015). Finally, the dataset for our performance analysis contained 674 documents with 41,294 references. The performance analyses comprise the statistical evaluation of the publication numbers of journals and authors to measure their productivity and of the citation frequencies of journals, authors, and publications to measure their impact (White and McCain 1998;Yue and Wilson 2004;Zupic and Čater 2015). These analyses help to identify key research, examine citation growth over time (Hota et al. 2019), and track major changes in research direction (Pilkington and Meredith 2009). We used the software Bibexcel for these analyses. Co-citation analysis examines pairs of documents that are jointly cited by other articles (Small 1973) in order to identify research themes within the CE field (Batistič and van der Laken 2019;López-Fernández et al. 2016). The higher the number of co-citations between two documents, the closer their connection and the more likely they belong to the same research cluster (Crane 1972;Zupic and Čater 2015). We decided to employ a document rather than an author co-citation analysis because authors might contribute to different topics and schools of thought whereas a document usually has a stricter focus (Acedo et al. 2006;Gmür 2003;Hota et al. 2019). Co-citation data allows for the detection of schools of thought (Pasadeos et al. 1998). We performed the following six steps: (1) selection of the unit of analysis; (2) retrieval of co-citation frequencies; (3) compilation of the raw co-citation matrix; (4) normalization of the raw co-citation matrix; (5) conducting multivariate analyses of the correlation matrix; and (6) validation and interpretation of the results (McCain 1990). Following this procedure, we further reduced the number of the most cited publications (Di Stefano et al. 2012;Pilkington and Meredith 2009;Schildt et al. 2006) to optimize the explanatory power of the analysis (Grégoire et al. 2006;Lampe et al. 2019). To find the optimal sample size, we tested several thresholds for documents and references based on the stress value obtained from multidimensional scaling (Hota et al. 
2019;Pilkington and Meredith 2009;Ramos-Rodríguez and Ruíz-Navarro 2004). McCain (1990) emphasizes the high noisiness of co-citation data and suggests that stress values smaller than 0.2 combined with high values for R-squared are acceptable. As a result, the threshold was set to 31 citations, which resulted in 76 documents for the co-citation analysis, as this configuration yielded the lowest stress value of 0.174, combined with a Dispersion Accounted For of 0.969 and a Tucker's Coefficient of Congruence of 0.985, which indicate an acceptable goodness of fit. The co-citation frequencies for the 76 most cited documents (cf. section 3.1) were retrieved with the help of BibExcel, and the raw co-citation matrix was compiled. The 76 × 76 square symmetrical matrix contains the co-citation counts, which represent how many times each pair of documents was cited together (Di Stefano et al. 2012;Hota et al. 2019). Following White and Griffith (1981), the diagonal values in the matrix were calculated as the sum of the three highest co-citations for each document divided by two. In the next step, the raw co-citation data was normalized using Pearson's r correlation, which allows the identification of the likeness of co-citation count profiles over all documents in the dataset (White and McCain 1998;Zupic and Čater 2015). The normalization was required because raw co-citation frequencies as simple similarity measures disregard different occurrence levels among items (Gmür 2003). The normalized correlation matrix was then used as an input for conducting multivariate analyses. We used SPSS to conduct an exploratory factor analysis as a principal component analysis (Di Stefano et al. 2012;Vogel and Güttel 2013). Factor analysis can identify documents that load on more than one factor and, thus, allows for a better exploration of the documents that may serve as a bridge between different approaches (McCain 1990). Kaiser's criterion was used to define the number of factors extracted and Varimax rotation was applied to interpret the results. Documents which had loadings ≥ 0.4 on more than one factor were assigned to the factor on which they loaded highest (Vogel and Güttel 2013). As researchers are advised not to rely solely on the results of a single clustering method (Zupic and Čater 2015), we triangulated the results with Multidimensional Scaling (MDS) (Di Stefano et al. 2012) using the SPSS scaling program PROXSCAL (Leydesdorff and Vaughan 2006). A bi-dimensional map was generated, on which heavily co-cited documents appear closer together (McCain 1990). High proximity of the papers within one group also indicates a high consistency of their conceptual domain (Di Stefano et al. 2012). We used UCINET NetDraw (Borgatti 2002) to visualize the co-citation results. Figure 1 demonstrates the temporal distribution of the 674 documents in the CE and related fields. The first publication on CE is "The Corporate Entrepreneur" by Lewis (1937). From then until the mid-1970s, the CE field did not experience any significant development, as only two articles were published. The period from then until the early 1980s can be described as a development phase, with fewer than five articles published per year. The beginning of the introduction phase, in which scholars' and practitioners' interest in CE begins to grow, is marked by the release of the works by Schollhammer (1981, 1982), Burgelman (1983a, b, c), Kanter (1985), and Pinchot (1985). 
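Before turning to the results, the six-step co-citation procedure described above can be condensed into a short script. The sketch below is illustrative only and does not reproduce the authors' BibExcel/SPSS workflow: reference_lists and core_docs are hypothetical inputs, the plain PCA stands in for the factor extraction (Kaiser's criterion and the varimax rotation would be applied to its output), and the MDS stress value corresponds to the goodness-of-fit criterion discussed above.

```python
import numpy as np
from itertools import combinations
from sklearn.decomposition import PCA
from sklearn.manifold import MDS

def cocitation_matrix(reference_lists, core_docs):
    """Raw co-citation counts for the selected core documents.
    reference_lists: iterable of sets of cited-document ids (one set per citing paper).
    core_docs: ids of the most-cited documents (e.g. the 76 above threshold)."""
    idx = {d: i for i, d in enumerate(core_docs)}
    C = np.zeros((len(core_docs), len(core_docs)))
    for refs in reference_lists:
        present = [idx[d] for d in refs if d in idx]
        for i, j in combinations(present, 2):
            C[i, j] += 1
            C[j, i] += 1
    # White and Griffith (1981): diagonal = sum of the three highest co-citations / 2
    for i in range(len(core_docs)):
        C[i, i] = np.sort(C[i])[-3:].sum() / 2
    return C

def analyze(C):
    """Pearson-normalize the profiles, then run PCA (factor-analysis proxy) and metric MDS."""
    R = np.corrcoef(C)                        # likeness of co-citation profiles
    pca = PCA().fit(R)                        # factor extraction; rotation applied afterwards
    mds = MDS(n_components=2, dissimilarity="precomputed")
    coords = mds.fit_transform(1 - R)         # turn similarity into a dissimilarity
    return pca, coords, mds.stress_
```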
As a result of the discovered positive relation between CE and a firm's performance, competitive position, and revitalization (Antoncic and Hisrich 2001;Zahra 1991), the field of CE started to gather increased attention from both practitioners and scholars. Since the beginning of the 1990s the field has been in its growth stage, reaching a first peak in 1999. The release of two special issues on CE by the journal Entrepreneurship Theory and Practice contributed to this peak. Papers devoted specifically to CE make up 17 of the 25 papers published in 1999. Whereas the number of papers published on the topics of CV and intrapreneurship had been growing since the middle of the 1980s, SE as a field of research first emerged at the beginning of the 2000s. In general, the majority (73%) of the records in the field of CE were published in the last 15 years. Performance analyses The 674 documents in the sample were (co)authored by 1094 authors from 54 countries around the globe. In terms of country productivity (Fig. 2), US authors have contributed 336 documents to the sample, UK authors 69, and Spanish authors 53, whereas the contribution from emerging countries was relatively smaller (Taiwan: 11, Turkey: 10, South Africa: 9, India: 5). Previous empirical findings (Cole and Phelan 1999;Gantman 2012;Schofer 2004) show that such over-representation of developed countries can be explained by the positive effect of a country's economic development on its scientific output, owing to wider resource availability and access. Another possible explanation could be language barriers (Gantman 2012;La Madeleine 2007). Table 1 shows the 25 most cited works. The three journals Entrepreneurship Theory & Practice, Journal of Business Venturing and Strategic Management Journal account for 14 (64%) of the 22 journal articles. Apart from the two major entrepreneurship journals, the number of papers in the Strategic Management Journal illustrates the relevance of CE for strategic management (Lampe et al. 2019) and indicates the maturity of (corporate) entrepreneurship due to the increased citations of top entrepreneurship journals (Busenitz et al. 2014). Table 2 shows the most cited as well as the most productive authors in the field. In terms of author productivity, the 26 most productive authors have (co)authored 266 works that account for 39.5% of all the documents in the sample. The h-index is the largest number h such that h of an author's articles have each received at least h citations (Hirsch 2005). The h-indices in the table refer only to the authors' publications on a CE topic; most authors' overall h-indices are higher. We apply Lotka's (1926) law to examine whether the productivity of CE research rests on a limited number of authors. Starting from the number of authors who have contributed a single study, Lotka's law allows one to predict how many authors would have published n articles. Following the recommendation of Andrés (2009), the small group of four prolific authors contributing a very high number of papers was excluded from the calculation in order not to overestimate the results. The authors' productivity in the dataset examined fits Lotka's law. Compared to other research fields like data mining with n = 3.629 (Tsai 2012) or psychology in tourism with n = 3.26 (Barrios et al. 2008), the obtained n = 2.635 demonstrates that there is a greater concentration of papers among a smaller number of prolific authors in the CE field. 
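Estimating the Lotka exponent reported above amounts to a least-squares fit on log-transformed author-productivity counts. A minimal sketch, assuming papers_per_author is a hypothetical list of publication counts per author with the prolific outliers already removed (Andrés 2009):

```python
import numpy as np
from collections import Counter

def lotka_exponent(papers_per_author):
    """Estimate the Lotka exponent n in  a_x ~ C / x**n,
    where a_x is the number of authors with exactly x papers."""
    counts = Counter(papers_per_author)        # x -> number of authors with x papers
    x = np.array(sorted(counts))
    a_x = np.array([counts[v] for v in x], dtype=float)
    # ordinary least squares on log a_x = log C - n * log x
    slope, _intercept = np.polyfit(np.log(x), np.log(a_x), 1)
    return -slope

# example: a field where most authors publish once and only a few publish repeatedly
sample = [1] * 800 + [2] * 120 + [3] * 40 + [4] * 15 + [5] * 6
print(round(lotka_exponent(sample), 3))   # yields an exponent of roughly 3
```

A lower exponent, such as the n = 2.635 obtained here, means the productivity distribution decays more slowly with the number of papers per author, which is why a comparatively larger share of the output is concentrated among repeat authors.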
Table 3 gives an overview of the most influential journals for CE. The list is dominated by entrepreneurship and management journals, which indicates that the formation of the field of CE took place at the intersection of strategic management and entrepreneurship research. Co-citation analysis Our factor analysis resulted in five factors explaining 95.3% of the variance, indicating an information loss of only 4.7%. An in-depth review of the documents assigned to the five factors revealed that Factors 3, 4, and 5 form individual research clusters, whereas the documents attributed to Factors 1 and 2 need a further segmentation. The clustering is depicted in Table 4. The content analysis revealed research clusters, which are presented below in the order of their appearance in the list. The first subfield (1a - Internal and External Determinants of CE Performance) is formed by 21 papers loading on Factor 1 and examines internal (e.g. structure) and external (e.g. industry) determinants that influence entrepreneurship in established firms as well as their performance. Some scholars have highlighted that high dynamism and hostility of the external environment contribute to the intensification of CE (Stopford and Baden-Fuller 1994;Zahra 1991;Zahra and Covin 1995). Others have focused on internal organizational factors that can foster entrepreneurial activity, such as ownership and governance structures (Zahra 1996;Zahra and Garvis 2000) or resource availability and top management support (Hornsby et al. 2002). Earlier works found in this subfield are devoted to the question of what determines and promotes entrepreneurship in different types of firms (Miller 1983;Miller and Friesen 1982). Later works examine the correlation between organizational factors and agency problems that affect entrepreneurial behavior within an organization (Jones and Butler 1992) and focus on the link between human resource management practices and CE (Hayton 2005;Hayton and Kelley 2006). In general, 10 documents belonging to this subfield were found amongst the 25 most cited works in the field of CE, which suggests that studies focusing on the link between external and internal determinants, organizational entrepreneurship and firm performance have received increased attention (Lampe et al. 2019). The second subfield (1b - Construct Exploration) includes 12 documents loading on Factor 1 that explore various constructs in the field of entrepreneurial organizations, such as CE (Covin and Miles 1999;Stevenson and Jarillo 1990), entrepreneurial orientation (Lumpkin and Dess 1996), and intrapreneurship (Antoncic and Hisrich 2001, 2003;Pinchot 1985). Other works in this subgroup summarize prior research findings, define the state of research and suggest further research directions (Dess et al. 2003;Guth and Ginsberg 1990;Phan et al. 2009;Zahra et al. 1999). Eight papers also loading on Factor 1 form the third subfield (1c - External Corporate Venturing). These papers focus specifically on forms of external CV such as corporate venture capital (CVC) investments (Chesbrough 2002;Dushnitsky and Lenox 2005a, 2006;Siegel et al. 1988) and on knowledge creation through external venturing activity (Dushnitsky and Lenox 2005b;Schildt et al. 2005;Wadhwa and Kotha 2006). These papers display negative factor loadings, which indicates that they possess a reverse co-citation profile: whenever a document cites a paper that loads positively on the given factor, it is unlikely to co-cite the papers having negative loadings on the same factor (Acedo et al. 2006). 
This suggests a divergence in the theoretical developments or topics discussed by the papers with positive and negative loadings (Acedo et al. 2006). Indeed, some scholars (Narayanan et al. 2009;Lampe et al. 2019) emphasize that works on CVC activities of incumbent companies have rarely been connected to the broader field of CV. In addition, Phan et al. (2009) note that the application possibilities of the theories of radical innovation and venture capital to the field of CE might be limited. The fourth subfield (1d - Theoretical Foundations) encompasses four documents loading on Factor 1, which serve as theoretical foundations for the development of the field of CE. Schumpeter's (1934) work "The Theory of Economic Development", which discusses the role of the entrepreneur in driving innovations, is considered to belong to the core entrepreneurship works (Ferreira et al. 2015;Hota et al. 2019). Two other documents in this subfield contributed to the development of the resource-based view (RBV). Although scholars recognize that there is a need for contextualization of the RBV, particularly for entrepreneurship research (Kellermanns et al. 2016;Siqueira and Bruton 2010), Barney's (1991) seminal work on the RBV still has a significant influence on the field of CE. The RBV and its extensions, such as the knowledge-based view (Grant 1996;Kogut and Zander 1992) and dynamic capabilities (Eisenhardt and Martin 2000;Teece et al. 1997), can be considered a predominant theoretical perspective adopted by scholars in the field of CE, indicating the focus of the research on the contributions of entrepreneurial activities to the development of a firm's strategic resources, competences and capabilities (Ferreira et al. 2015). Kanter's (1985) work, which is also identified in this subfield, discusses the obstacles in the innovation and venture development process in established firms and loads positively on Factors 1, 2 and 3. This indicates that this work might serve as a bridge between two or more approaches (Acedo et al. 2006). The last two papers that load on Factor 1 form the fifth subfield (1e - Research Problems). They focus on the problem of common method bias (Podsakoff et al. 2003) and provide a distinction between the terms mediator and moderator variable (Baron and Kenny 1986). The sixth subfield (2a - Linking Entrepreneurship and Strategy) covers five papers loading on Factor 2. This subfield focuses on firms' value creation and on sustaining competitive advantages through SE as the integration of the fields of entrepreneurship and strategic management (Hitt et al. 2001). Ireland et al. (2003) elaborate the theoretical framework of SE, which involves advantage-seeking (strategic management perspective) and opportunity-seeking (entrepreneurship perspective) behaviors, while Hitt et al. (2001) explore SE in various organizational domains. Furthermore, Covin and Miles (2007) empirically investigate the relation between CV and business strategy. Two papers examine the manager's role in the process of strategic renewal (Floyd and Lane 2000) or CE (Hornsby et al. 2009). The seventh subfield (2b - Construct Refinement) includes four papers loading on Factor 2. It addresses the growing maturity of CE as a field of research, which, on the one hand, creates urgency around domain definition and the reconciliation of various existing terms and definitions (Sharma and Chrisman 1999) and, on the other hand, triggers the emergence of entrepreneurial education (Kuratko 2005). 
Another work devoted to the synthesis of prior research and the integration of its key findings in the narrower field of CV is the paper by Narayanan et al. (2009). A paper by Miles and Covin (2002) also explores the domain of CV and suggests a typology of CV forms by the focus of entrepreneurship and the presence of investment intermediation. Only one paper (Eisenhardt 1989) loading on Factor 2 forms the eighth subfield (2c - Case Study Research Method). It discusses the use of case studies as a research method and is thus not specifically related to the field of CE. However, the high citation count of this paper indicates that CE research often uses case study methods. The ninth subfield (3 - CE, Structure and Empowerment) encompasses all documents which load on Factor 3. Interestingly, seven works in this subfield were written between 1983 and 1986. Five of these works are authored by Burgelman (1983a, b, c, 1984;Burgelman and Sayles 1986). Drawing attention to the fact that the Schumpeterian distinction between entrepreneurial and bureaucratic economic activity has become outdated, Burgelman (1983c) emphasizes that for entrepreneurial success, organizations need to experiment with various organizational forms and new resource combinations. The idea that innovation and entrepreneurship processes should be organized as a systematic and rational process fostered by management is also supported by Drucker (1985). In addition, Burgelman (1983a, b, c) claims that to achieve entrepreneurial success, organizations need to unlock the entrepreneurial potential at the operational levels by promoting autonomous strategic behavior. Kanter (1985) adheres to the same idea and suggests that people at all organizational levels should be empowered with information, resources and support to innovate inside organizations. The tenth subfield (4 - Organizational Learning and Dynamic Capabilities) comprises 10 documents which load on Factor 4. It focuses on the knowledge-based view of the firm (Grant 1996;Kogut and Zander 1992) and dynamic capabilities (Eisenhardt and Martin 2000;Teece et al. 1997). Two documents are the works on the behavioral theory of the firm (Cyert and March 1963) and the evolutionary theory of the firm (Nelson and Winter 1982). Both of these works served as a starting point for the development of the dynamic capabilities concept (Pierce et al. 2008). Strongly intertwined with the knowledge-based view are the concepts of absorptive capacity (Cohen and Levinthal 1990;Zahra and George 2002) and organizational learning (Ahuja and Lampert 2001;March 1991), covered in four documents. Finally, the eleventh subfield contains the book by Block and MacMillan (1993), which is the only document loading on Factor 5. It discusses how established firms innovate through CV activities. The results of the Multidimensional Scaling show a high consistency with the findings of the factor analysis. Figure 3 depicts the graphical clustering into five groups, which resemble the factors identified by the factor analysis. In order to compare the results of the MDS and the factor analysis, the factors were highlighted on the MDS map. The majority of the papers that load on Factor 1 are clustered very tightly, which suggests that the documents in the group have similar co-citation profiles. Cluster 1c (External Corporate Venturing), which encompasses the papers that have negative loadings on Factor 1, is placed in the upper part of the map (F1-subfield 3). 
The documents that load positively on Factor 1 are located in the bottom part of the map (F1-clusters 1a, 1b, 1d, 1e). This supports the results of the factor analysis and demonstrates that the field of CVC develops independently from the fields of CE and CV. At the same time, the external CV and CVC group is located in close proximity to the documents from Factor 4, specifically to the documents that focus on the topics of absorptive capacity, organizational learning and the knowledge-based view. This shows that scholars take an interest in external CV and CVC activity from the perspective of knowledge creation for corporate investors (Dushnitsky and Lenox 2005a, b;Wadhwa and Kotha 2006) and examine different types of CVC investments as avenues for interorganizational learning (Schildt et al. 2005). Moreover, the MDS map demonstrates that the documents loading on Factors 1 and 2 cluster tightly, which serves as an indicator of their conceptual proximity. This also becomes evident through an examination of the secondary factor loadings of the documents found in these groups, which appear to be bidirectional. The presence of the same authors in the groups loading on Factors 1 and 2, such as Hornsby, Kuratko and Ireland, is one possible explanation of the conceptual proximity between the documents in these groups. These authors may have been continuously working on the development of the same research domains and based their later works on the concepts and ideas from the earlier ones. Overall, the co-citation-based clustering of the documents is consistent with the way the CE concept has evolved over the last five decades (Kuratko 2010;Kuratko and Audretsch 2013;Kuratko et al. 2015). Whilst in the 1980s CE research primarily focused on the necessity of resource commitments and sanction from the organization (cluster 3), the exploration of the influence of CE on firm performance attracted scholars' attention in the 1990s (clusters 1a and 1b) (Kuratko and Audretsch 2013). In the 2000s, scholars have mainly dealt with the questions of building sustainable competitive advantages through CE and analyzed the strategic aspects of CE and its manifestations (clusters 2a and 2b) (Kuratko and Audretsch 2013). The co-citation map of the CE and related fields is depicted as an additional visualization of the findings in Fig. 4. The bigger the node, the higher the number of citations received by the document. The proximity between the documents represents their similarity, and each link between two nodes represents the number of times these documents were co-cited. For better visualization, only links that represent more than 10 co-citations are shown on the graph. Discussion and future research opportunities Overall, the field of CE has evolved significantly over the last decades and its maturity as a research field has been continuously growing, as evidenced by the prevalence of entrepreneurship journals among the top cited and publishing journals (Busenitz et al. 2014), by the increasing average number of authors per publication (Lipetz 1999;Serenko et al. 2010), as well as by numerous attempts of scholars to resolve definitional issues in the field and define its boundaries. The intellectual structure of the field demonstrates theoretical diversity based on various management theories, such as the resource-based view, the knowledge-based view, organizational learning, and dynamic capabilities. 
However, some authors call for a greater engagement of CE research with other theoretical perspectives, such as transaction cost economics, real options, institutional theories and social network theory (Corbett et al. 2013;Hoskisson et al. 2011;Lampe et al. 2019;Narayanan et al. 2009). Other scholars emphasize that drawing from established theories might prevent researchers from creative theory building, which could enrich the field of CE (Zahra 2005). This discussion indicates that the field of CE still has many opportunities to build a stronger theoretical and empirical foundation. Our analyses have shown that external corporate entrepreneurship is largely unrelated to the other CE sub-clusters. We propose further research on this largely overlooked intersection between internal and external CE and the dynamics embedded in it. Firms might have a portfolio of corporate venture projects that they pursue through internal or external organizational forms, for which they might choose specific spatial solutions located at the firm's premises or outside, in incubator settings. Future research might analyze the portfolio configuration associated with the more internal or external governance of venture units and/or the spatial setting. The portfolio exploration might tie in characteristics that have been considered in the resource-based view, such as the degree of relatedness between the resources of the parent firm and the venture. Internal and external corporate venturing might largely be a question of the autonomy of the new venture in terms of resource endowments, operations, and strategic decisions. Considering autonomy and the venture stage might bridge internal and external corporate venturing and better explain the evolution of corporate entrepreneurship, which is poorly understood so far. Research on these criteria and the portfolio might simultaneously inform research on corporate parenting styles (Goold et al. 1998;Nilsson 2000). Additionally, future research in this direction could bring in more spatial and regional aspects, which are particularly important when the knowledge needed for the venture's evolution is sticky or embedded in a local space (Mudambi et al. 2018). The consideration of local contexts also ties in with novel developments with respect to the use of makerspaces and coworking-spaces by incumbent firms (Halbinger 2018;Waters-Lynch and Duff 2019). To improve their innovation potential, incumbent firms tend to use makerspaces and coworking-spaces of external providers or to establish internal coworking-spaces in which they allocate entrepreneurial or innovative projects (Spreitzer et al. 2015;Bouncken et al. 2020). Research in this area ties in with previous CE research on knowledge, learning, dynamic capabilities, and also on empowerment and new organizational forms for CE (see subfield 9). The limited knowledge about the evolution and performance of such projects in these dedicated locations and about their interdependencies with, or autonomy from, an incumbent as the parent organization shapes an interesting path for future research. In extension to the possible dynamics among internal and external corporate venturing and the learning focus of CE research, we advocate for future research on the evolution of venture units. Internal corporate venture units are founded by incumbent firms. While they have some autonomy, managers from the incumbent might still influence ideas and decisions of the venture unit (Gard et al. 2018). 
The autonomy and the location of the venture (at the firm's premises, outside the premises, local embedding, etc.) will influence the venture's evolution. Future research could examine the dynamics of internal and external corporate venturing and the conditions of venture evolution itself. These conditions might be affected by the resource connections between the venture unit and the parent and by the spatial or local setting of the venture unit. The analysis of venture evolution and its organizational and local embedding relates to the important notion that internal corporate venture projects depend on the corporate parents not only for the initial resources, knowledge and support (Govindarajan and Kopalle 2006). Different forms of spatial or local integration might come with different resources, knowledge, and support. For example, coworking-spaces might create a sense of community that brings social-emotional support to the entrepreneurs (Garrett et al. 2017). Considering the spatial setting in makerspaces or coworking-spaces might also inform a better understanding of learning-by-doing, which has been reported as a core driver of change related to internal CE (Block 1982;Garrett and Covin 2015). Research on the location of CE connects to the question of the unit of analysis of CE. Previous research mainly focuses on CE within large established corporations (Birkinshaw 1997, 2014;Halme et al. 2012;van Rensburg 2012;Zahra et al. 2013) and primarily on companies operating in the manufacturing sector (Cucculelli and Bettinelli 2015;Jones 2005;Wadhwa and Kotha 2006). Future research could consider other sectors of the economy, especially the service sector (Rogan and Mors 2017), put more emphasis on small and medium-sized firms (Kearney and Morris 2015), and address the international stretch of CE. The globalization of the world's economy, the growing recognition of "born global" firms (Zahra et al. 2013), as well as the search for international competitive advantages through entrepreneurial behaviors (Simon 1996) have led to the creation of a new research domain of international entrepreneurship. This research field was initially formed at the intersection of the fields of international business and entrepreneurship (McDougall and Oviatt 2000) and continued to emerge as a separate field, which shifted scholarly attention away from the topics around international CE (Zahra et al. 2013). Therefore, the amount of scholarly work devoted to the topics of international CE activity remains scarce. We follow Lampe et al. (2019) and Zahra et al. (2013) in the proposition that future research could explore the influence of the institutional characteristics of different international settings on the form and success of CE or could focus on examining the role that national cultures play in CE activity. Furthermore, it would also be interesting to understand the differences in how various types of firms (e.g. family-owned firms) approach international CE. Conclusion Our bibliometric analysis assessed performance indicators and revealed research themes within the CE field. In essence, our contribution lies in providing insights about the ongoing discussions in the field as well as about shifts in research foci, whilst enabling other researchers to contribute to the field in a more effective manner. 
Although much work has already been accomplished in this field, scholars still have opportunities to build stronger theoretical and empirical foundations within each of the subfields and to conduct exploratory research to further advance the field and extend its theoretical grounding. We suggest future research on the dynamics of CE, on the evolution of venture units (perhaps morphing from internal to external ones), and on its determining factors and its location. We assume that autonomy and support (knowledge, social networks) exert strong influences, as does the location of the venture unit, e.g. within a coworking-space, as well as the international setting of entrepreneurship. Despite these contributions, this study faces several limitations. First, as bibliometric analyses are based on citation data, they favor older over newer publications, which have not had sufficient time to accumulate citations (Zupic and Čater 2015). As a consequence, bibliometric analyses cannot adequately assess the relevance and impact of newer publications. Second, the assumption that citations objectively reflect ascribed relevance is questionable. For example, the so-called Matthew effect describes that highly cited articles are partly cited because of their already high citation counts (García-Lillo et al. 2017). Self-legitimization strategies, micropolitics, or citation cartels (Vogel and Güttel 2013) as well as the prestige of the journal where a paper was published (Hota et al. 2019) could direct citing behavior. Such biased citations would ascribe relevance to papers which might only contribute to a limited extent. Third, while the clusters which resulted from the factor analysis and the MDS are objective, the assignment of research themes is rather subjective. This becomes particularly obvious for clusters 1 and 2, which consist of several sub-clusters that cannot be explained by statistical means but are based on a content analysis searching for a common thread in the sub-samples. However, the remaining, more consistent clusters are also somewhat blurry, as they contain publications with divergent themes. Therefore, both labeling clusters and forming sub-clusters are interpretive tasks. Due to this partial subjectivity, other researchers might have assigned different research themes to the (sub-)clusters (Tiberius et al. 2020b). Fourth, the application of several thresholds reduces the dataset, excluding a large part of extant research. Although we followed the established guidelines to identify the threshold values and applied different thresholds to test the robustness of the results, the limited dataset does not represent the whole body of research. Fifth, the distinctiveness of the factors obtained in the factor analysis may be overstated due to the fact that only the highest factor loadings were considered when assigning documents to the factors. We tried to minimize this bias by examining cross-factor loadings of some documents and identifying them as bridges between different streams of research. Sixth, although we did not limit our scope to the records obtained only from the Web of Science and did not focus exclusively on journal articles, our data collection was still limited to sources written in English. In addition, we examined only journal articles published in journals of the first and second SJR quartile. This decision was justified by the purpose of focusing on the most influential works and journals in the field. 
Further studies may extend the dataset by including journals from lower quartiles or sources in languages other than English. Funding Open Access funding enabled and organized by Projekt DEAL. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
9,752.4
2021-02-08T00:00:00.000
[ "Business", "Economics" ]
Free Vibration Analysis of Single-Walled Carbon Nanotubes Based on the Continuum Finite Element Method This paper presents a continuum finite element mechanics approach to model the vibration behaviours of single-walled carbon nanotubes (SWCNTs) of varying lengths, aspect ratios, chiralities, boundary conditions, axial loads and with initial strain applied. The results are in good agreement with the open literature and show that resonance-based carbon nanotube sensors have the potential to meet the high-level performance requirements inherent in many sensor-based applications such as mass detectors, biomedical sensors, and the monitoring of metal deposition and chemical reactions, amongst others. Currently, the sensitivity of many electromechanical transducers used for these applications has reached its theoretical limit. The merit of carbon nanotubes is that, due to their miniature dimensions, the sensitivity of these sensor-based applications is vastly improved. Introduction Carbon nanotubes (CNTs) possess some of the most unique chemical and physical structures, conferring on them very special mechanical, chemical, optical, thermal and electrical properties. CNTs therefore have wide-ranging application prospects [1][2][3]. This extends to newer pioneering structural materials in particular, because the excellent mechanical characteristics that CNTs possess include a high elastic modulus, high toughness, high strength and a low density. In this way, they are very suitable and can be effective components in NEMS/MEMS, as ultra-high frequency resonators and in composite materials [4,5]. Owing to their small sizes, single-walled carbon nanotubes (SWCNTs) have a large surface area to volume ratio and, on external application of a mechanical deformation, they can respond quickly with a high sensitivity. Together with the ultra-high natural frequencies inherent in SWCNTs, they can aid in the design and development of the fastest ever scanning probes, resonant magnetic force microscopes and even high-clocked supercomputers. While the mechanical property results for SWCNTs, and in particular the elastic modulus results predicted through research, have shown wide acceptance, the predicted vibration behaviours are presented by varying models and theories and these can vary considerably. With regard to this, a simple yet reliable dynamic model is presented to analyze the vibration behavior of SWCNTs. Experimental measurement is a very costly approach and the results can often be very scattered because of the small size of SWCNTs. On the other hand, the analytical analysis approach is more cost-effective but is mostly applied to simple-structured CNTs showing simple mode deformations. In this way, a numerical, computer-based approach is a powerful and efficient way of carrying out the analysis of the dynamic behaviours of CNTs in various configurations of parameters and applied boundary conditions [4]. Numerical models comprise molecular dynamics and continuum mechanics models. Continuum mechanics analysis is amongst the simplest yet most efficient approaches to study the static and dynamic behaviours of CNTs compared to the other models so far. A space-frame structured model was proposed by Li and Chou [6] and is implemented in this study to analyze the dynamic vibration behaviours of SWCNTs with distinct lengths, chiralities, aspect ratios, boundary conditions, axial loads and with initial strain applied. 
The outcome shows that the results of this study are in agreement with the solutions obtained from the more sophisticated and complex molecular dynamics models. Structure and Properties of Carbon Nanotubes Two kinds of CNTs exist; these are single-walled carbon nanotubes (SWCNTs) and multi-walled carbon nanotubes (MWCNTs). MWCNTs are SWCNTs with different radii that are coaxially interposed. The simplest way to imagine a SWCNT is by making reference to a graphene sheet rolled up to produce a hollow cylinder with end caps. Hexagonal carbon structures make up the cylinder, while pentagonal carbon structures make up the end caps. Periodic repetition of the hexagonal pattern leads to the connection of every carbon atom to three other neighboring atoms. This connection is a covalent bond, and it confers on the structure its impressive mechanical characteristics by virtue of being a strong chemical bond. Chirality, which is characterized by a chiral vector C_h and a chiral angle θ, determines which atomic structure the CNT takes. The chiral vector is the line joining two crystallographically equivalent sites O and C on a flat 2D graphene sheet (Figure 1). The lattice translation indices (n,m), together with the unit vectors a_1 and a_2 of the hexagonal lattice, define the C_h vector through the equation C_h = n a_1 + m a_2. The angle this chiral vector makes relative to a zigzag direction (n,0) defines the chiral angle θ. Zigzag nanotubes are formed when the chiral angle is equal to 0°, whilst armchair nanotubes occur at a chiral angle of 30°. They both are limiting cases, with (n,n) characterising armchair nanotubes while (n,0) corresponds to zigzag ones. When the chiral angle is not 0° or 30°, chiral nanotubes are formed and they bear the indices (n,m) where n ≠ m (Figure 2). Structural Continuum Mechanics Model In the geometrical structure of carbon nanotubes, neighboring carbon atoms are interconnected by covalent bonds such that they form the hexagonal lattice structure at the walls. In a 3D space, these have a certain characteristic bond length as well as bond angles (Figure 3). On application of an external force, the covalent bonds constrain the motion of the individual atoms of the CNT. Thus the net deformation is due to the interactions of these covalent bonds. By applying an analogy between these bonds and connecting elements occurring between neighboring carbon atoms, a CNT can be regarded as a finite element space-frame structure. The individual atoms would then act as joints between the connected elements. The correlation linking the sectional stiffness parameters of structural mechanics to the force constants of molecular mechanics is then established. The bond sections are assumed to be uniformly circular in shape and identical [6]. This leads to l_x = l_y = l, with only the stiffness parameter values 'EA', 'EI' and 'GJ' required to be solved for. Since deformation of the space-frame model causes a change in the strain energies, the stiffness parameter values can be determined through energy equivalence. Then, as per structural mechanics theory [6,7], for a uniform beam of length L with an axial force N acting on it, the strain energy is U_A = (1/2)(EA/L)(ΔL)^2, where ΔL represents the deformation due to axial stretching. For a uniform beam under a bending moment M (Figure 4), the strain energy is U_M = (2EI/L)α^2, where α represents the rotational angle at the beam endings. For a uniform beam under torsion T, the strain energy is U_T = (1/2)(GJ/L)(Δβ)^2, where Δβ represents the relative rotation between the beam endings. 
Also, merging the dihedral angle torsion and the improper torsion into a single term, the simple harmonic forms of the molecular mechanics potentials are U_r = (1/2)k_r(Δr)^2, U_θ = (1/2)k_θ(Δθ)^2 and U_τ = (1/2)k_τ(Δφ)^2, where k_r is the force constant for bond stretching, k_θ is the force constant for bending of the bond angles and k_τ is the torsional resistance. Δr, Δθ and Δφ are the increment in bond stretching, the change in bond angle and the change in angle for the case of bond twisting, respectively. From equations (1) to (6), the stretching energies are given by U_r and U_A, U_θ and U_M are the bending energies, and U_τ and U_T are the torsional energies. It can then be assumed that the angle of rotation 2α is equal to the net change Δθ of the bond angle, that ΔL is equal to Δr and that Δβ is equal to Δφ. Comparing the equations for the structural mechanics values 'EA', 'EI' and 'GJ' and the molecular mechanics parameters k_r, k_θ and k_τ, it can be found that EA/L = k_r, EI/L = k_θ and GJ/L = k_τ (7). Equation (7) illustrates the application of structural mechanics to the modeling of the CNT. Therefore, if the values of k_r, k_θ and k_τ are known, the stiffness parameters 'EA', 'EI' and 'GJ' can be found easily. Then, by making use of the procedure to find the stiffness matrix for frame-like structures, the determination of static and dynamic responses can be appropriately simulated. As mentioned earlier, carbon nanotubes consist of atoms interconnected by covalent bonds to form a hexagonal lattice structure. They can be characterised by a certain bond length a_c-c and a bond angle in three-dimensional space. Therefore the bonds constrain the displacement of these individual atoms while they are subjected to vibration dynamics. In this way, the bonds can be considered as interconnected load-carrying elements and the atoms as joints. This in effect makes CNTs analogous to space-frame structures. Considering the SWCNTs as equivalent space-frame structured models to solve for the natural frequencies under free undamped vibration, the equation of motion takes the form [M]{ü} + [K]{u} = {0}, where [M] is the global mass matrix, [K] is the global stiffness matrix, and {u} and {ü} are the nodal displacement and acceleration vectors, respectively. Neglecting the electron masses, the carbon nucleus mass (m_c = 1.993×10^-26 kg) is assumed to be effectively concentrated at the atomic centers. In this way the inertial atomistic property of CNTs is taken into account. By using this methodology, basic structural mechanics can determine the mechanical behaviour of CNTs. Using the ANSYS finite element module, three-dimensional finite element models of CNTs were created for analysis in this project. Beam188 3D elastic ANSYS elements are used to mimic the bonds. These are uniaxial and allow for compression, bending as well as torsional behaviour. Each of them has six degrees of freedom at each node, allowing for translations in the x, y and z directions as well as rotations about these axes. These elements can be described by a certain cross-sectional area, two moments of inertia and the inherent material properties, and are defined by two to three nodes. The hexagonal nanostructure present in CNTs (Figure 5) can thus be simulated as a space-frame structure. An extension of this approach allows for the simulation of the entire CNT lattice structure. The length of the element L corresponds to the bond length a_c-c, and the thickness of the element represents the wall thickness t, which, by assuming the cross-sectional profile of the element to be circular, is analogous to the element diameter d. 
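To make the linkage in Equation (7) and the subsequent eigenvalue problem concrete, the following minimal sketch computes the equivalent beam stiffnesses from force-constant values commonly assumed in the literature for this model (they are assumptions here, not values reported in this paper) and then solves a toy one-element eigenproblem standing in for the assembled ANSYS space-frame model:

```python
import numpy as np
from scipy.linalg import eigh

# Molecular-mechanics force constants commonly assumed for C-C bonds
# (illustrative values, not taken from this paper), converted to SI units.
k_r     = 6.52e-7 * 1e9    # N/m,       bond stretching   (6.52e-7 N/nm)
k_theta = 8.76e-10 * 1e-9  # N*m/rad^2, bond-angle bending (8.76e-10 N*nm/rad^2)
k_tau   = 2.78e-10 * 1e-9  # N*m/rad^2, torsion            (2.78e-10 N*nm/rad^2)
L       = 1.421e-10        # m, C-C bond length
m_c     = 1.993e-26        # kg, mass of a carbon nucleus

# Equation (7): sectional stiffness values of the equivalent beam element
EA = k_r * L       # axial stiffness,     N
EI = k_theta * L   # bending stiffness,   N*m^2
GJ = k_tau * L     # torsional stiffness, N*m^2

# Toy eigenvalue problem: two atoms joined by one axial "beam", one end clamped,
# standing in for the assembled global [K] and [M] of the full space-frame model.
k = EA / L                                   # axial spring constant of the element
K = np.array([[k, -k], [-k, k]])
M = np.diag([m_c, m_c])
w2 = eigh(K[1:, 1:], M[1:, 1:], eigvals_only=True)   # solve [K]{u} = w^2 [M]{u}
print(np.sqrt(w2[0]) / (2 * np.pi) / 1e12, "THz")    # ~29 THz for this toy oscillator
```

The full model assembles the same kind of element stiffness and lumped nodal masses for every bond and atom of the tube before solving the generalized eigenproblem, which is what yields the bending and breathing modes discussed below.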
While CNTs occur as hollow tubes with closed end-caps at their extremities, open-ended CNTs were modelled so as to simplify the analysis. Using the ANSYS macro language, a cyclic procedure was established for model generation. This is carried out by inputting the individual Cartesian coordinates of the carbon atoms, determined exactly using the JCRYSTAL© software, together with the value of the bond length at the equilibrium state. Following this methodology, the carbon atom coordinates generate the nodes in the finite element model and eventually the beam elements between them. These equivalent space-frame structures were then solved to find their fundamental frequencies. Figure 6 depicts the lateral side views of the finite element meshes of armchair, zigzag and chiral single-walled carbon nanotubes. Structural Continuum Mechanics Model Accurate theoretical analysis models are very important when determining the fundamental frequencies and associated mode shapes, for several reasons. When it comes to nano-mechanical resonators, for example, the fundamental oscillation frequencies are key attributes of the associated resonator. In addition, the natural frequency and mode shape results, if an adequately accurate theoretical analysis model is used, allow the determination of key mechanical characteristics of CNTs such as the elastic modulus. In this section, the effects of varying length, aspect ratio, chirality, boundary conditions, axial loads and initial strain on the fundamental frequencies of SWCNTs have been studied and compared quantitatively with molecular dynamics results to verify the accuracy. All frequency results presented in this study are in THz unless otherwise stated. Length Length is one of the key attributes that affect the natural frequencies of a SWCNT. An armchair type (10,10) and a zigzag (17,0) SWCNT with aspect ratios varying from 8.28 to 39.1 were analyzed. These chiralities have been applied so that their diameters would be approximately the same. The first vibration mode frequencies were extracted for each of them and the results were tabulated and compared with the literature values [8]. Overall, it can be seen that an increase in the length causes the natural frequencies for the first mode to decrease, which agrees with the open literature. Figures 7 and 8 illustrate how the equivalent space-frame beam model performs against molecular dynamics results. It can be seen that the values are nearer to the MD results when the length-to-diameter ratio is greater than 10.07. Also, the difference in chirality does not affect the fundamental frequencies considerably. The slightly higher values for the armchair (10,10) SWCNT compared to the zigzag (17,0) one may be accounted for by the chirality-dependent anisotropic nature of the carbon nanotube structure. Aspect ratio As depicted in Figures 7 and 8, an increase in the length-to-diameter ratio causes a decrease in the fundamental frequencies of the SWCNTs. What is also apparent is that the aspect ratio is a key attribute in determining whether the SWCNT takes a bending or a breathing mode shape. Figure 9 illustrates the first five resulting mode shapes for a SWCNT of length 70 Å and chirality (7,7). The first and second mode shapes both take a half-sine wave shape while the last three take the shape of a breathing wave. Changing the aspect ratio by simulating a (7,7) SWCNT of length 50 Å shows different results, with the higher modes bearing breathing-wave-shaped modes. 
These simulations show a trend whereby increasing the SWCNT length causes breathing wave mode shapes to be weakened and the bending wave mode shapes to become more dominant, as shown in Figure 10. Tables 1 and 2 show the effect of the chirality of SWCNTs on the resulting fundamental frequencies. The chiralities applied in Table 1 have been chosen so that the SWCNTs have roughly the same diameter. It can be seen from Tables 1 and 2 that the natural frequencies obtained are approximately the same, that is, the results are independent of the chiral angle magnitudes to a considerable degree. This agrees with the observations by Li and Chou [4], who used the molecular dynamics approach. Chirality Next, the mode shapes of SWCNTs with different chiralities were investigated. Figure 11 illustrates the first three mode shapes of chiral and zigzag SWCNTs. It can be seen that the chiral and zigzag mode shapes are interchanged from breathing to bending largely due to a change in length, further showing the insensitivity to chirality but a greater sensitivity to the length-to-diameter ratio. These CNT mode shapes are very important criteria to be considered when designing CNT-associated devices. Boundary conditions The natural frequencies of armchair SWCNTs with a chirality of (5,5) under clamped-free (CF) and clamped-clamped (CC) boundary conditions were investigated. Figure 12 illustrates these sets of conditions. The results for the clamped-free conditions show that there is a smooth decrease in the natural frequencies as the aspect ratios increase incrementally for each mode and that the higher the mode, the higher the resulting frequencies, as shown in Figure 13. Since these SWCNTs have almost identical diameters, the inverse proportionality between the fundamental frequencies and the aspect ratio implies that a longer SWCNT tends to show decreased frequency sensitivity. The results are in good agreement with the MD values in the open literature, showing the same trend, with the fundamental frequencies nearer to the MD results from an aspect ratio of 7.55 onwards. Clamped graphene sheets and CNTs are often the configuration of choice for studying the mechanical properties of carbon nanotubes, both with atomistic approaches and in experimental studies. On the other hand, clamped-clamped SWCNT configurations are principally applied to nano-strain sensors as well as micro-oscillators. For comparison, the same (5,5) armchair SWCNT used for the CF conditions was applied to the clamped-clamped condition. Figures 14 and 15 show the obtained frequency results and the comparison with molecular dynamics results. As with the CF condition, the SWCNTs tend to show an inverse proportionality between the natural frequencies and the corresponding aspect ratio. However, the introduction of the new boundary condition shows much higher frequency results compared to the CF condition, which is definitely an asset for CNT-based micro-oscillators. Initial strain The natural frequencies of CNTs display high sensitivity to externally applied loads. Resonant strain sensors based on CNTs make use of this principle. It is therefore important to find the relationship between the resonant frequencies of CNTs and the applied stress conditions. The vibration behavior when a compressive or tensile initial strain is present is essential to fulfill CNTs' prospects as nanosensors. 
The simulations were carried out on a SWCNT having a chirality of (5,5) and an aspect ratio of 13.89, with tensile or compressive strains applied to determine the response of its resonant frequencies. The obtained results (Figures 16 and 17) show the same trend as the MD results. Figure 18 shows that initial tensile strains tend to cause an increase in the frequencies while initial compressive strains tend to cause a decrease in the frequencies. This agrees with continuum vibration theory predictions [8]. In this paper, a free vibration analysis of SWCNTs based on the continuum finite element model, with varying lengths, aspect ratios, chiralities, boundary conditions, axial loads and initial strain conditions, was performed. It was found that an increase in the length or aspect ratio causes a smooth decrease in the fundamental frequencies and that higher frequencies occurred at higher modes of vibration. It was also observed that the aspect ratio is a key element in determining whether the vibration mode takes a bending or a breathing wave shape. The results have shown that increasing the aspect ratio causes breathing mode shapes to weaken while bending or half-sine wave mode shapes become more prominent. Since these mode shapes are key criteria when designing CNT-associated devices, this highlights why uniformity control is so important in nanotube synthesis, especially for CNT-based vibration sensors and resonators. The results for chirality showed that a change in the chiral angle magnitude does not cause any considerable change in the results, which agrees with the open literature. For a clamped-free SWCNT, an inverse proportionality relationship was observed between the resulting fundamental frequency and the aspect ratio. The results took a similar trend for clamped-clamped SWCNTs, but much higher frequency values were obtained in this case, which is definitely an asset for CNT-based micro-oscillators. Moreover, the application of an axial load at the midsection of SWCNTs showed that the logarithmic relationship between frequencies and aspect ratio could be exploited in mass-sensing devices for quantifying nano-particle size. Shorter CNTs tend to have increased sensitivity. The introduction of initial strains showed that initial compressive strains lead to a reduction in frequencies while initial tensile strains result in an increase in frequencies, which agrees well with continuum vibration theory. Overall, most of the results are in good agreement with the more complex and sophisticated molecular dynamics literature results, and in terms of relative simplicity and efficiency the equivalent space-frame finite element model used performed aptly.
4,410
2015-01-16T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Clinical Features And Neuroimaging Findings In Acute Cerebral Infarction Patients Using RAPID Artificial Intelligence (RAPID AI) Software Analysis – A Series Of 54 Cases Background: Stroke is the second leading cause of death and the leading cause of permanent disability globally. Vietnam is a developing country with a high prevalence of stroke but is under-resourced in terms of specialist staff able to interpret complex brain imaging. Methods: A case series of 54 stroke patients admitted between October 2019 and October 2020 where thrombectomy was being considered and where 'Rapid' Artificial Intelligence (AI) was used to analyze images of stroke. Results: The mean age of patients was 73.39 ± 12.46 years with 57% male. The most common risk factors were (76%), atrial fibrillation (24%), diabetes (20%), alcohol (15%), and smoking (9%). The most common clinical signs were hemiparesis in 76% of the patients, followed by dysphasia in 50% and memory loss in 28% of the sample. 7% presented with dizziness and 7% with headache. 6% were unconscious on admission. ASPECTS evaluation showed that 24 (44%) patients had good ASPECTS scores of 8-10, 17 (32%) patients had ASPECTS scores of 5-7, and 13 (24%) patients had ASPECTS scores of 0-4. The number of patients with an infarct core volume <70 mL was 50 (93%), while a mismatch volume of >15 mL was observed in 31 (55%) patients and 22 (41%) patients had a mismatch ratio >1.8. The assessment of CT imaging of thrombi showed 51 cases of anterior cerebral circulation, including 13 (24%) cases diagnosed as ICA, 30 (76%) cases diagnosed as MCA, and 8 (15%) cases diagnosed as SA. There were 10 cases of MCA-M1 (19%), 7 cases each of MCA-M2 and MCA-M4 (13%), and 6 cases of MCA-M3 (11%), respectively. There were three cases of posterior cerebral circulation, comprising one case of Posterior Cerebral Artery (PCA) infarction and two cases of Basilar Artery (BA) territory infarction. Conclusions: RAPID Artificial Intelligence (RAPID AI) software analysis combined with clinical assessment can be used to identify the size and site of cerebral infarction and the diffusion-perfusion mismatch in acute cerebral infarction patients. Background Stroke is the second leading cause of death and the leading cause of disability globally [1]. In recent years, the burden of stroke associated with mortality, morbidity, and disability has been increasing across the world [2,3]. Risk factors associated with stroke can be divided into groups of modifiable and non-modifiable features. The group of modifiable risk factors includes hypertension, smoking, diet, and physical inactivity, while the group of non-modifiable risk factors comprises age, sex, family history, and race/ethnicity [4]. Vietnam is a developing country with nearly 100 million people, and stroke and stroke treatment are a major health burden in Vietnam [2,5,6]. About 200,000 people have a stroke each year with a 50% mortality rate, and with only 10% of patients returning to a normal life after stroke [6]. The facilities for the diagnosis and treatment of stroke are limited. The diagnosis of stroke is currently based on clinical findings combined with computed tomography (CT) imaging or magnetic resonance imaging (MRI); however, CT or MRI facilities are not readily available and affordable in all hospitals and access to neuroradiology expertise is very limited. Phu Tho Provincial General Hospital (GHP) is the leading provincial public hospital and the largest hospital in the northwest of Vietnam. 
The hospital has 1500 beds and well-equipped facilities for image diagnosis with both modern CT and MRI machines. The stroke center of GHP was established in 2018, containing 150 beds for inpatient treatment of people who suffered acute stroke and who require rehabilitation after stroke. In June 2019, the hospital began to use RAPID artificial intelligence (RAPID AI) software (iSchemaView, USA) for the imaging analysis of stroke characteristics based on an artificial intelligence technique. This study presents the diagnostic and treatment features of stroke patients based on RAPID AI software assessment in Phu Tho Provincial General Hospital in Vietnam in 2019-2020. Study Design This study was a case series based on the data from electronic medical records of stroke patients admitted to Phu Tho Provincial General Hospital between October 2019 and October 2020, for whom RAPID AI was used to analyze images of stroke. The study tested the practicability and value of using the technology in routine practice in Vietnam. Patients The stroke patients included in this study had RAPID AI used for the diagnosis and analysis of cerebral images. All patients had CT or MRI scans performed within 24-48 h after the onset of stroke and were being considered for late thrombectomy. Information on the patients was collected using electronic medical records (Fig. 1). All patients gave written informed consent for participation in the study. Demographic Characteristics Demographic data collected included age, sex, work status, living area, time of onset of stroke, reasons for admission, hospitalization status, treatment duration, and discharge status. Risk Factors for Stroke and Clinical Presentation Information concerning risk factors for stroke, including age, sex, hypertension, diabetes, atrial fibrillation, smoking, and alcohol habits, was extracted from the electronic medical record profiles. Smoking habits, alcohol consumption, and onset time were determined in the present study by a neurologist questioning patients and/or relatives who had observed the onset and who were aware of the patient's recent health; clinical characteristics on admission were also recorded. Laboratory Investigations Diabetes was defined as a fasting blood glucose of greater than 7 mmol/L or use of hypoglycemic medications, and hyperlipidemia as a serum level of total cholesterol of greater than 5.2 mmol/L. CT Imaging and Perfusion Imaging Analyzed Using RAPID AI On admission, ischemic stroke was diagnosed using clinical findings and brain imaging. The CT perfusion image characteristics were characterized in terms of ASPECTS, infarct core volume, and mismatch volume. Stroke patients were divided into four categories as follows: total anterior circulation infarcts, partial anterior circulation infarcts, posterior circulation infarcts, and lacunar infarcts. The imaging data of all patients were analyzed independently by two experienced neurologists. Lesions were divided into the following four categories: infarction of the anterior cerebral artery territory, middle cerebral artery territory, posterior cerebral artery territory, and vertebrobasilar artery territory. Cerebral imaging parameters were used to determine the locations of the lesions and then to identify infarctions in either a single vascular territory or multiple vascular territories (vascular territories ≥ 2). Statistical Analysis SPSS (version 22.0; IBM Corp., Armonk, NY, USA) was used for data analysis.
Descriptive statistics were used to summarize sociodemographic and clinical information as well as subclinical data. Categorical variables were expressed as frequencies and percentages. Ethics Approval The study was approved by the Ethics Committee of Phu Tho Provincial General Hospital. The confidentiality of the information regarding patients was ensured in such a way that the data could only be used for the study purpose. The information obtained from the patients' electronic medical records is also presented only in a collective manner. Patient Characteristics In total, 54 electronic medical records of patients diagnosed with ischemic stroke were identified. The age range of the 54 ischemic stroke patients was from 18 to 95 years. The mean age of the patients was 73.39 ± 12.46 years, and the majority of patients (85.2%) were in the age group of > 60 years. Male patients comprised 31 (57.4%) of the total patients. The majority of the patients were retired (42.6%) and living in rural areas (64.8%). The time from onset to hospital arrival ranged from 0-6 h for 28 patients (51.9%), from 6 to 24 h for 24 patients (44.4%), and beyond 24 h for 2 patients (3.7%). Patients admitted to hospital were mostly in states of unconsciousness (50.0%) and coma (7.4%), while 42.6% of patients were conscious. The number of patients discharged from the hospital who had made some recovery was 40 (74.0%), while 7 (13%) patients became worse or died (Table 2). CT Perfusion Images Based on RAPID AI Analysis The distribution of the ASPECTS score is given in Table 5. Only 4 (7.4%) patients had an infarct core volume greater than 70 mL compared to 50 (92.6%) patients with an infarct core volume smaller than 70 mL. The mismatch volume was greater than 15 mL in 31 (54.7%) patients, while 23 (45.3%) patients had a mismatch smaller than 15 mL. In total, 32 (59.3%) patients had a mismatch ratio under 1.8, but 22 (40.7%) patients had a mismatch ratio over 1.8 (Table 5). In our study, we observed 51 cases of anterior cerebral circulation, including 13 (24.07%) cases that were due to ICA occlusion, 30 (75.93%) cases diagnosed as MCA occlusion, and 8 (14.81%) cases diagnosed as small artery occlusion (SA). Of these, the number of cases of MCA-M1 was 10 (18.5%), the number of cases of each of MCA-M2 and MCA-M4 was 7 (13%), and the number of cases of MCA-M3 was 6 (11%). There were only three cases of posterior cerebral circulation, including one (1.9%) case of PCA and two (3.7%) cases of BA. In total, 28 stroke patients were admitted to the hospital within the first 6 h after onset, and 24 patients were admitted within 6 to 24 h. Only two patients were admitted beyond 24 h (Table 6). Table 6. Cerebral occlusive position based on the RAPID AI analysis (n = 54). Discussion Phu Tho Provincial General Hospital was the first hospital in Vietnam to introduce RAPID AI technology for the diagnosis and treatment of stroke patients. This study presents the results of the diagnosis and treatment of stroke patients for whom RAPID AI combined with neuroimaging was used for the detection, characterization, and prognostication of acute strokes. AI technology is a rapidly developing field and represents a promising avenue for fast and efficient imaging analysis [7]. RAPID AI has been approved and certified by the FDA for perfusion imaging and is currently used in 50 countries. RAPID AI software can be used for perfusion imaging in stroke.
The RAPID AI software was validated in the DEFUSE 2 study in 2012 and received FDA approval in 2013 [8]. RAPID software analyzes CT and MRI perfusion within 3 minutes and generates colorimetric perfusion maps of the stroke penumbra as well as the core and mismatch volumes and the mismatch ratio [9]. The penumbra mismatch sensitivity is 100% and specificity is 91% [10][11][12][13]. RAPID AI technology was used in the recent large LVO ET trials EXTEND IA, SWIFT PRIME, CRISP, DEFUSE 2 and 3, and DAWN [8, 10, 14-16]. Stroke incidence increases with age and is more common in males [17]. The time from symptom onset to hospital arrival (time to hospital) is a key factor for delivering effective treatment and improved outcomes of stroke patients, especially for patients with ischemic stroke. Our results for time to hospital showed that about half of patients arrive after six hours and are therefore less likely to be eligible for reperfusion treatment. If outcomes are to improve in Vietnam, the systems for prehospital care, in terms of patient and population awareness of the symptoms of stroke and of emergency medical services, need to improve. The patients who do present predominantly have very severe strokes, with over half having reduced levels of consciousness. These results were similar to those of studies in Hong Kong [22] and Shanghai [23], which showed that 56.3% and 51.9% of patients arrived at a hospital within 6 h after the first symptom of stroke. Previous studies have presented a median arrival time that varied from 2.51 to 15 h [24]. However, studies on the time to hospital of patients in European countries reported that the majority of patients arrived within 3 h of onset, earlier than in Asian countries, which is due to the higher rate of ambulance transport of stroke patients and better stroke awareness knowledge in these countries [25]. In our study, we found that the most common risk factor was hypertension, identified in 75.9% of patients, followed by atrial fibrillation in 24.1% of patients, and diabetes in 20.4% of patients. These results align with previous studies indicating that uncontrolled hypertension is the most important risk factor for stroke in developing and developed countries [26]. This may reflect the fact that hypertension has been identified as the most prevalent and powerful modifiable risk factor. On the other hand, the numbers of stroke patients with smoking and alcohol habits in this study were lower (< 15%) than in other studies [27]. Hemiparesis and dysarthria were two of the most common clinical presentations of stroke in our study, at 75.9% and 50% of the total, respectively. This is similar to cohorts reported from Egypt [28] and India [29], which showed that 76.1% of patients had hemiparesis and 60% of patients had dysarthria. Memory loss commonly occurs as a result of stroke and was observed in 27.8% of patients in our study. Other investigators have reported a dementia incidence of approximately 25% at 3 months after ischemic stroke. In the current study, a small number of patients suffered from dizziness and headache (7.4%), and unconsciousness and near death (5.6%). These results also agree with the results of El Tallawy [28]. There was variety in the laboratory tests requested, as this depends on the stroke clinicians rather than a formal protocol. In our study, the laboratory tests included BP, RBC, BG, cholesterol, PC, INR, and triglyceride. These tests were examined to evaluate patients before the administration of tPA.
Some of the tests are necessary to determine suitability for intravenous thrombolysis, but some are often unnecessary and just delay commencement of acute treatments [32]. The application of RAPID AI to read CT perfusion images was first applied in Phu Tho Provincial General Hospital, Vietnam, in June 2019. RAPID AI processing takes less than 3 minutes to send the results for ASPECTS, infarct core volume, mismatch volume, and mismatch ratio to PACS and the RAPID mobile apps. This application was developed by an experienced neurologist, Greg Albers [33]. The ASPECTS is a 10-point quantitative topographic CT scan score used for patients with middle cerebral artery (MCA) stroke [34]. Our study showed the ASPECTS score was good in 44.4% of patients, compared with 31.5% showing a bad score and 24.1% showing the worst possible score. The assessment of the ASPECTS score has been used to direct therapies. Those with a low ASPECTS score, suggesting a large MCA infarction, can be excluded from futile intra-arterial treatments, which are unlikely to result in patients gaining functional independence and which increase the risk of hemorrhage [34]. The infarct core volume shows the part of the AIS that was already infarcted or was irrevocably destined to infarct regardless of reperfusion [35]. The inclusion criteria used to select patients eligible for interventional thrombectomy, for patients admitted to hospital within or beyond 6 h of onset, include an initial infarct volume of < 70 mL and a ratio of ischemic penumbra to infarct core of ≥ 1.8 [11,38]. The evidence of using RAPID showed the diffusion-perfusion mismatch identified by RAPID was in agreement with the observation of a human reader in 60 cases (95.2%) and in disagreement in 3 cases (4.8%, 3 false positives) [11]. RAPID was able to identify mismatches with 100% sensitivity and 91% specificity (false positive rate = 9.1%, false negative rate = 0%) in analyses with the observations of the human reader as the ground truth [11]. The RAPID software provided information about the location and size of the infarct and whether there was any potentially salvageable brain, enabling the physicians to make decisions on the suitable treatment, whether thrombolysis or thrombectomy [39]. The diagnostic evaluation of occlusive thrombi on noninvasive studies now constitutes an integral component of acute stroke management [39]. In a hospital with limited access to neuroradiology 24 hours a day, seven days a week, such a facility is of critical importance and has huge potential to improve the quality of clinical care in hospitals and countries with limited resources. Clearly, this needs to be combined with improving public awareness that stroke is a treatable condition if patients get to hospital quickly after the onset of symptoms, and with an improved emergency medical service. The current poor outcomes, with a 50% mortality and only 10% making a full recovery, are unacceptable. Using modern technology such as RAPID or similar artificial intelligence systems will be important to improve outcomes. Conclusions The majority of patients in this study had severe strokes and presented late. While work is needed to improve prehospital care, the value of artificial intelligence software to identify which patients might still benefit from reperfusion will be important, particularly in resource-poor settings, to improve the quality of care and outcomes.
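To make the perfusion thresholds quoted above concrete, here is a minimal sketch of how RAPID-style outputs might be screened against them. It is illustrative only, is not the RAPID AI software's internal logic, and the class and field names are hypothetical.

```python
# Illustrative sketch only: applying the thresholds quoted above (infarct core < 70 mL,
# mismatch volume > 15 mL, mismatch ratio >= 1.8) to perfusion-derived volumes in order to
# flag potential late-window thrombectomy candidates. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class PerfusionResult:
    core_volume_ml: float      # estimated infarct core volume (mL)
    penumbra_volume_ml: float  # estimated critically hypoperfused volume (mL)

    @property
    def mismatch_volume_ml(self) -> float:
        return self.penumbra_volume_ml - self.core_volume_ml

    @property
    def mismatch_ratio(self) -> float:
        return self.penumbra_volume_ml / self.core_volume_ml if self.core_volume_ml > 0 else float("inf")

def meets_mismatch_criteria(r: PerfusionResult) -> bool:
    """Return True when all three thresholds quoted in the text are satisfied."""
    return (r.core_volume_ml < 70.0
            and r.mismatch_volume_ml > 15.0
            and r.mismatch_ratio >= 1.8)

# Example: a 30 mL core with 90 mL of hypoperfused tissue satisfies all three criteria.
print(meets_mismatch_criteria(PerfusionResult(core_volume_ml=30.0, penumbra_volume_ml=90.0)))
```

In practice such a flag would only complement, never replace, the clinical assessment described in the study.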
This is the first study to present results on the diagnostic and treatment features of stroke patients in Vietnam for whom RAPID AI was used for their screening and management. Table 2 is not available with this version. Figure 1. Flow diagram of the study and selection of eligible patients based on RAPID AI.
3,964.8
2021-05-20T00:00:00.000
[ "Medicine", "Computer Science" ]
Development of Sophisticated Thinking Blending Laboratory (STB-LAB) to Improve 4C Skills for Students as Physics Teacher Candidate The 21st century is an increasingly interactive and attractive learning era. The learning process in the 21st century does not only focus on teaching and learning activities in the classroom and explaining theories; laboratory activities are needed to help provide visuals to students, especially future physics teacher candidates. Various innovations have been made in the development of laboratory activity models, yet many still focus on only one type of activity, namely real or virtual. Laboratory activities in the 21st century should not focus solely on the 4C skills currently in the spotlight while neglecting analytical skills and the balance between LOTS and HOTS. This study aims to develop a mixed laboratory activity model that can build 4C skills with a focus on analytical skills and the balance between LOTS and HOTS; in addition, two activities, real and virtual, are combined into one. The method used in this research is Research and Development using the ADDIE model, with three meetings in the implementation. The results obtained in this study show that the STB-LAB obtained good model and guide validity results. The N-Gain data showed that, in the control class, only creative thinking skills were effective enough, with a value of 59.47; in contrast, in the experimental class, only communication skills fell in the effective enough category, with a value of 57.84, while the other aspects reached the effective category with values > 76.00. The hypothesis test showed that using STB-LAB could improve the 4C skills of students as physics teacher candidates. INTRODUCTION Learning in the 21st century is learning that has left behind various old and traditional ways of student learning (AACTE and P21 2010). Learning models in the 21st century have been developed with various characteristics to adapt to all the needs of educators. Innovation can arise in various ways, such as combining or modifying existing ideas; innovation can even arise because creativity needs analysis (Blândul 2015). One of the many innovations developed in laboratory activities is the combination of virtual and real laboratories. Laboratory activities in education are among the activities in the learning process needed to observe, activate, and interpret findings (Peña-Ríos et al. 2012; Gunawan, Harjono and Sahidu 2015). In addition, laboratory activities can improve students' understanding of the content rather than just the theory. Sulistiowati, in her research, found that students' interest in laboratory activities was very high, with students showing strong interest in learning and improved understanding after using real and virtual laboratories (Sulistiowati et al. 2013). Laboratory activity can facilitate the process of transferring knowledge from educators to students; in Putra's research (RP Putra et al. 2021), 78.4% of the semester 2 to 6 students surveyed regarded the use of virtual laboratories in learning as very necessary, because beyond theory alone, virtual and actual laboratory activities are needed to illustrate the theory being studied.
Knowing that laboratory activities are an essential additional requirement in learning, educators must select laboratory activity models according to their needs so that students can absorb everything the educators deliver (Nurdyansyah and Fahyuni 2016; Tayeb 2017). Previous research (Hanum 2013) revealed that a monotonous and unattractive learning model, including the laboratory activity model, can cause many shortcomings in learning and laboratory activities. Based on the researchers' analysis of the laboratory activity models used at the Physics Education Study Program, UIN Sunan Gunung Djati, Bandung, over the past four years, there has been no integration of real laboratory activities with virtual ones; educators sometimes carry out only one of the two types of laboratory activities. With the emergence of the Covid-19 pandemic, laboratory activities became very limited, and not all students received actual laboratory activities. Using virtual laboratories for a long time prevents students from gaining real skills in using laboratory equipment, as explained in research conducted by Faour (Faour et al. 2018), which states that students who use virtual laboratories for too long will not develop psychomotor and tool-operation skills. Various problems arise in the laboratory activity model, one of which is the tension between real and virtual laboratories (Suryanti et al. 2019). The dilemma faced by educators is that real laboratory activities are sometimes unavailable, yet when they want to carry out virtual laboratory activities, there is no syntactic harmony between the real and virtual versions. A previous study (Nanto et al. 2022) revealed that virtual laboratory activities are sometimes only made to resemble actual laboratory activities. Another study (Jaya 2012) revealed the main problem in virtual laboratory activities: the absence of alignment of work steps with real laboratory activities, so that the use of the laboratory seems to be only for verification. A related study (Setya et al. 2021) revealed that in real laboratory activities students sometimes do not know whether the data obtained after the experiment are correct or not. However, another study (Riki Purnama Putra et al. 2021) found no difference between virtual and real laboratory results; the data values obtained differed by only 0.1 to 0.2%. A previous study showed that the e-module developed could only be used in virtual laboratory activities because the e-module was HOT-VL based. Studies on the development of laboratory activity models have sought to create two different laboratory activities with the HOT-LAB and HOT-VL models (Malik and Setiawan 2015; Sapriadil et al. 2019). However, other research (Tayebinik and Puteh 2013) shows that blended learning should be made into one model, because if the models are separated, educators would be confused and would repeat the same thing on a different platform. Based on the problems and previous findings, the learning model and the laboratory activity model are deemed to have the main objective of achieving learning outcomes. In particular, the laboratory activity model must have a hierarchy so that the blending of laboratory activities is more focused and clear, without having to use the same model on different platforms. Therefore, this study aims to develop a model of laboratory activities based on the sophisticated thinking blending laboratory.
This laboratory activity model results from sophisticated thinking, the higher order thinking laboratory, and the higher order thinking virtual laboratory. METHODS This research focuses on product development using the Research and Development approach and the ADDIE model, which has five stages, namely: (1) Analysis; (2) Design; (3) Development; (4) Implementation; (5) Evaluation. The flow description can be seen in FIGURE 1. In the analysis stage, the activities analyze needs, such as observing the current situation and the availability of models, and analyze the results of evaluating laboratory achievements in previous semesters at the Physics Education Study Program of UIN Sunan Gunung Djati Bandung. The design stage involves designing the syntax for the STB-LAB model and adapting it to the available learning theories, while the development stage covers the development of the STB-LAB guidebook. The outputs of the design and development stages are each assessed for content and construct validity using a questionnaire sheet scored by several lecturers; the results are averaged and converted into a proportion using the equation shown in EQUATION 1 (Sugiyono 2013): X = (total score obtained / maximum possible score) × 100% (1), where X indicates the priority or quality category of the learning model, as seen in TABLE 1 (Sugiyono 2013). Implementation was carried out in three meetings with three different materials, using control and experimental classes of 30 subjects each, over two semesters or about eight months. Small- and large-scale tests were used to determine the practicality results in the product assessment, as indicated by the percentage value of the observation sheet for lecturers and by the 4C skills results measured using various rubrics, whose aspects can be seen in TABLE 2. The results for each skill are tested using an independent sample t-test comparing the pretest and N-Gain scores of the two classes, with a design that can be seen in TABLE 3. The evaluation consists of evaluating the results of the statistical tests on the 4C skills values that have been obtained against the results of the validation and practicality assessments. RESULTS AND DISCUSSION This study aims to develop a complete laboratory activity model with a guidebook, with results in the form of 4C skills, using the ADDIE R&D method. The explanation for each ADDIE stage is discussed in turn. Analysis Models of laboratory activities in general still use conventional approaches such as cookbook or guided-inquiry laboratories. A previous study revealed that the use of ICT in learning models must develop because, in the 21st century, students are required to master technology that will develop rapidly in the future. In line with this, other research that analyzed learning models, including laboratory activities, found that not all learning models can accommodate technological needs. As a result, students are perceived to master only one ability, and the abilities mastered are still considered basic abilities, not the abilities needed in the 21st century (Khoerunnisa and Aqwal 2020). Other related studies had integrated two laboratory activity models, namely PjBL and STEM, but still had to repeat the activities to achieve the two goals (Rochim, Prabowo and Budiyanto 2021). From various problem analyses, previous research found that the needs of the Education students of UIN Sunan Gunung Djati Bandung require integration between two types of laboratory activities, namely the combination of real and virtual.
In addition to direct experience, students can also come to know the use of technology (RP Putra et al. 2021). The researchers took initial data to analyze the needs of educators and students for laboratory activities, which can be seen in TABLE 4. A previous study on hybrid activities revealed the various shortcomings of hybrid activities with two or more models, which include: (1) time is wasted because more must be prepared in making assessments; (2) there will be gaps in learning outcomes; (3) students become confused because the learning process keeps changing; and (4) the teaching and learning process becomes undirected (Raes et al., 2020). Design The Sophisticated Thinking Blending Laboratory (STB-LAB) activity model is designed with the following characteristics: (1) using constructivism learning theory; (2) oriented to balancing LOTS with HOTS cognition, focusing on the LOTS-HOTS disposition by applying concepts to the material presented; (3) providing a stimulus for higher-order thinking by utilizing the lower level as a starting point; (4) comparing the results between virtual and real activities as a benchmark for high-level analysis; (5) using levers as a benchmark for the transition from LOTS to HOTS; (6) setting the activities in a persuasive-axiological manner; (7) using computing and big data; (8) constructing real-world problems with a 1:2 truth ratio, with reasons. The STB-LAB laboratory activity model has a syntax with five stages, namely: (1) the disposition stage; (2) the argumentation stage; (3) the verification stage; (4) the laboratory stage; (5) the communication stage. The disposition stage presents a problem from everyday life with three or more arguments raised about real-world problems. The arguments raised about real-world problems have a truth ratio of 1:2, with the intention that students understand the initial concept of the material presented and understand what will be done in the laboratory activities later, so as to create imagination and curiosity in students about the answers. In addition, it makes it easier for students to identify the variables to be studied later. The disposition stage of the STB-LAB is based on Gestalt and Piaget learning theory, which says that learning is needed as a means to build and develop experience (Anidar 2017; Indrawati 2019). In addition, a related study showed that presenting a stimulus based on real, everyday events will stimulate students to understand what will be learned, and students will also realize the importance of seeking information before the activities (Malik et al. 2017). The argumentation stage is carried out in three activities: the first activity is an argumentation activity in which students individually determine arguments and describe hypotheses for the selected arguments. The second activity is the description of the basic theory, in which students describe theories related to the chosen arguments so that the arguments put forward are logical and rational. The third activity is the argumentation discussion, where students exchange arguments while exchanging thoughts and opinions to get to know each other and learn new things from different points of view. The argumentation stage is in line with Vygotsky's learning theory, which holds that learning must build awareness of and a foundation for what is being said so that it becomes an idea that people can accept (Sulisworo, Ristiani and Kusumaningtyas 2019).
Other research shows that students must be able to build trust in the arguments they raise when they have an opinion; arguments based on theories and concepts will be more easily accepted by people and also help open new perspectives for the interlocutor (Harackiewicz and Priniski 2018). Another study revealed that someone who frequently voices opinions, and can therefore anticipate other people's opinions, develops communication skills very rapidly (ES 2017). Bruner's social-constructivism learning theory (Rannikmäe, Holbrook and Soobard 2020) is also in line with the formation of communication skills, in which the necessary learning takes place between friends and educators, with the aim of building on each other's ideas and gaining new knowledge so that a broad mindset is formed. The verification stage is the stage in the STB-LAB model where students conduct laboratory activities virtually by carrying out an initial exploration, namely determining the variables to be used in the real laboratory activities. The use of virtual laboratories at the beginning of the activity also aims to build students' understanding of reading data, data collection, and operating tools, so that when doing the real laboratory activities, students will not be confused later. In addition, students can find out whether the data obtained are correct or not by looking at the pattern of the values obtained. A previous study (Ramadiani et al. 2022) revealed that there is no difference in the data generated between a virtual laboratory and a real laboratory; the results obtained are the same, with only a difference of about 0.4% due to environmental errors or human error in taking the real laboratory data. Related studies revealed that research subjects felt less confident and stammered when doing a real laboratory (Aşıksoy and Islek 2017); still, the results showed that when a virtual lab was used first, the research subjects felt more confident and had no doubts when carrying out the real laboratory activities. The laboratory stage is the stage in the STB-LAB model where students carry out real activities with thoughtful and comprehensive data collection, later filtering the data according to individual needs to answer the proposed hypotheses statistically. Then, students process and analyze the virtual and real laboratory data and compare the results between the virtual and real laboratories. Collaborative ability can be seen in laboratory activities, especially when collecting data together; this can be seen from how cohesive the students are in making decisions and working in their teams (Malik et al. 2021). A person's creative thinking ability can be recognized and measured when carrying out an activity as a team, in terms of how proficient the person is in finding new steps or new breakthroughs when collecting data, thus making data collection more flexible and efficient (Khoiri et al. 2017). In addition, regarding analytical ability in laboratory activities, Agustian and Seery (2017) explain that analytical thinking skills can be seen when students can distinguish variables and know which data are appropriate to use. The communication stage is the final stage in the STB-LAB model, where students make reports on the laboratory activities, which can be in the form of videos, reports, and articles, accompanied by hypothesis test results to decide on the final hypothesis and to revisit the initial hypothesis put forward in the argumentation session.
This is relevant to John Dewey's learning theory (Williams 2017), in which learning tools must build up through typical stages so that students can master the material and dare to speak up or carry out tests later. In addition, the ability to think analytically comes into play (Seery et al. 2017); analytical thinking skills are seen when students can describe their findings based on statistical testing and consistency across discussions (Ghani et al. 2017). The design stage also provides expert validity results focusing proportionally on content and construct validity. This validation is carried out before conducting product trials. The input and suggestions from the validators can be seen in TABLE 5. TABLE 5. Validator input and suggestions (before revision / after revision): (1) Before: the learning theory used is not appropriate and not well described; After: readjust the learning theory and add learning theories according to the stages in the syntax. (2) Before: the sentences used in the syntax are not operational enough, and the flexibility of educators is not described; After: replace the sentences with operational ones and address the flexibility of educators by combining two stages, namely disposition and argumentation. (3) Before: lack of clarity on learning activities at the laboratory activity stage; After: clarify the stages of laboratory activities by adding what educators and students should do. (4) Before: there are errors in spelling and in writing foreign-language terms; After: improve the spelling and writing of foreign-language terms. (5) Before: skills or achievements at the disposition stage are not clearly described, and there is a lack of references at the disposition stage; After: add hierarchical references to the skills or achievements that students will obtain. After carrying out the syntax revision based on the expert validation results in TABLE 5, the model is re-assessed by the content and construct validity validators, as seen in TABLE 6. The results in TABLE 6, the assessment of the STB-LAB model, show that content validity reaches 90% and construct validity reaches 95%, indicating that the model is valid and feasible for testing with students. Development The development carried out covers the STB-LAB model together with its guidebook. Implementation The STB-LAB laboratory activity model was implemented with the subjects. The subjects in the control class were 30 early-semester students of the Physics Education Study Program at UIN Sunan Gunung Djati Bandung, and the experimental subjects were 30 second-semester students of the same study program. The laboratory activity model in the control class is the guided-inquiry laboratory, while the experimental class receives the treatment using the STB-LAB model. Initial results at the implementation stage can be seen in TABLE 9 for the control class N-Gain and TABLE 10 for the experimental class N-Gain, with graphs in FIGURE 2 for the control class and FIGURE 3 for the experimental class. The N-Gain results in FIGURE 2 and TABLE 9 show that, in the control class, only the creative thinking skills aspect reaches the quite effective category, while the other three aspects fall in the category below it.
Meanwhile, the N-Gain results for the experimental class, shown in FIGURE 3 and TABLE 10, indicate that the experimental class is effective in three skills: (1) critical thinking skills, (2) creative thinking skills, and (3) collaborative skills. FIGURE 3 and TABLE 10 show that using the laboratory activity model can improve 4C skills in terms of the N-Gain category. Evaluation The final stage of developing the STB-LAB model is to determine the statistical improvement of the 4C skills, starting from normality and homogeneity tests, followed by an independent sample t-test for each aspect. The normality and homogeneity test results for each aspect can be seen in TABLE 11. Normality and homogeneity tests are the main requirements for the paired sample t-test, which later becomes the reference for determining the hypothesis of 4C skills improvement. Based on TABLE 11, for the critical thinking skills aspect, the pretest normality is 0.200 for the control class and 0.112 for the experimental class (both sig. > α = 0.050, so the data are normally distributed), and the pretest homogeneity is 0.311 (sig. > α, so the data are homogeneous); for the N-Gain, normality is 0.200 for the control class and 0.063 for the experimental class (both normally distributed), and homogeneity is 0.071 (sig. > α, so the data are homogeneous). For the creative thinking skills aspect, the pretest normality is 0.112 for the control class and 0.055 for the experimental class (both normally distributed), and the pretest homogeneity is 0.955 (homogeneous); for the N-Gain, normality is 0.200 for the control class and 0.119 for the experimental class (both normally distributed), and homogeneity is 0.957 (homogeneous). For the communication skills aspect, the pretest normality is 0.066 for the control class and 0.101 for the experimental class (both normally distributed), and the pretest homogeneity is 0.085 (homogeneous); for the N-Gain, normality is 0.200 for the control class and 0.136 for the experimental class (both normally distributed), and homogeneity is 0.932 (homogeneous). For the collaborative skills aspect, the pretest normality is 0.083 for the control class and 0.077 for the experimental class (both normally distributed), and the pretest homogeneity is 0.486 (homogeneous); for the N-Gain, normality is 0.150 for the control class and 0.200 for the experimental class (both normally distributed), and homogeneity is 0.901 (homogeneous).
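For readers who want to reproduce this kind of analysis outside SPSS, the following is a minimal sketch, under assumed (randomly generated) score data, of the pipeline described above: normalized gain, a normality check, a homogeneity-of-variance check, and a t-test between the two classes. The function and variable names are our own and are not the study's instruments.

```python
# A minimal sketch (using SciPy rather than SPSS) of the statistical pipeline described above:
# normalized gain (N-Gain), a normality check, a homogeneity-of-variance check, and a t-test
# between the control and experimental classes. The score arrays are hypothetical.
import numpy as np
from scipy import stats

def n_gain(pre, post, max_score=100.0):
    """Hake's normalized gain for each student: (post - pre) / (max - pre)."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return (post - pre) / (max_score - pre)

# Hypothetical pretest/posttest scores for one 4C aspect in the two classes (30 students each).
rng = np.random.default_rng(0)
pre_ctrl, post_ctrl = rng.normal(40, 8, 30), rng.normal(65, 10, 30)
pre_exp,  post_exp  = rng.normal(40, 8, 30), rng.normal(80, 8, 30)

g_ctrl, g_exp = n_gain(pre_ctrl, post_ctrl), n_gain(pre_exp, post_exp)

# Normality of the N-Gain in each class (p > 0.05 -> treat as normally distributed).
print("normality:", stats.shapiro(g_ctrl).pvalue, stats.shapiro(g_exp).pvalue)

# Homogeneity of variances between the classes (p > 0.05 -> treat as homogeneous).
print("homogeneity:", stats.levene(g_ctrl, g_exp).pvalue)

# Independent-samples t-test on the N-Gain of the two classes, as in the study design.
print("t-test:", stats.ttest_ind(g_ctrl, g_exp, equal_var=True))
```

The study also reports paired sample t-tests within each class; those would use stats.ttest_rel on the pretest and posttest scores of the same students instead.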
The results of normality and homogeneity in both classes for every aspect do not show abnormal or non-homogeneous data, so the next step is to find the significance value of the paired sample t-test. The results in TABLE 12 show that, in the paired sample t-test on the 4C skills, only creative thinking skills obtained sig. < α in the control class, while in the experimental class none of the values obtained sig. > α. Based on the hypotheses H0 = there is an increase in ability and Ha = there is no increase in ability, the hypothesis decisions for each class follow from these significance values. CONCLUSION Based on the results of the development and the statistical results found, it can be concluded that the Sophisticated Thinking Blending Laboratory (STB-LAB) activity model has been successfully developed, with content validation of 90% and construct validity of 95%; both validation results for the STB-LAB model fall in the good category. For the STB-LAB guidebook, the content validation result is 85% and the construct validity is 75%; both validation results for the preparation of the STB-LAB guidebook also fall in the good category. In general, the STB-LAB model consists of 5 stages, namely: (1) the disposition stage; (2) the argumentation stage; (3) the verification stage; (4) the laboratory stage; (5) the communication stage. The results of the 4C skills evaluation show that, in the control class N-Gain test, only the creative thinking skills aspect is categorized as quite effective, while the other three aspects fall below that category. Meanwhile, the results of the N-Gain test in the experimental class show that only communication skills were categorized as quite effective, while the other three aspects reached the category above it. The results of the paired sample t-test show that in the control class only creative thinking skills improved, while in the experimental class all 4C skills improved. It is hoped that further research on the use of STB-LAB in laboratory activities will focus on improving 4C skills and other abilities. Communication skills rubric (scores 4-1). First criterion (determining facts and opinions to identify and summarize main ideas): 4: the practitioner is able to determine facts and opinions to identify and summarize the main ideas to be conveyed accurately and in their entirety; 3: able to do so accurately but not in their entirety; 2: able to do so but not correctly at all; 1: not able to determine facts and opinions to identify and summarize the main ideas to be conveyed. Reading: 4: the practitioner delivers the presentation without reading the text in its entirety, without a full-text display, and without hesitating; 3: delivers the presentation without reading the text in its entirety and without a full-text display but sometimes stutters; 2: delivers the presentation by reading the text in its entirety but without a full-text display; 1: delivers the presentation by reading the text in its entirety with a full-text display. Listening: 4: the practitioner listens to directions from the instructor and can identify facts in a message/information; 3: listens to directions from the instructor but there is a miscommunication in identifying facts in a message/information; 2: listens to directions from the instructor but is unable to identify the facts in a message/information; 1: does not listen to directions from the instructor and so cannot identify facts in a message/information. Understanding (understanding the purpose of communication): 4: the practitioner is able to translate messages properly and correctly; 3: able to translate the message well but not completely; 2: able to understand the main idea of a message but needs help in translating it.
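As a small illustration of how rubric scores such as those above could be aggregated into the percentage used in EQUATION 1 and then mapped to a quality category, consider the sketch below; the category thresholds are hypothetical placeholders, since TABLE 1 is not reproduced here.

```python
# A small sketch of aggregating rubric scores into the proportion used in Equation (1):
# X = (total score obtained / maximum possible score) x 100%. The interpretation bands are
# hypothetical; the actual bands come from TABLE 1 (Sugiyono 2013).
def proportion(scores, max_per_item=4):
    """Convert a list of rubric scores into a percentage of the maximum possible score."""
    return 100.0 * sum(scores) / (max_per_item * len(scores))

def category(pct):
    # Hypothetical interpretation bands, for illustration only.
    if pct >= 85: return "very good"
    if pct >= 70: return "good"
    if pct >= 55: return "fair"
    return "poor"

scores = [4, 3, 4, 2]          # e.g. summarizing, reading, listening, understanding
pct = proportion(scores)
print(f"{pct:.1f}% -> {category(pct)}")
```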
6,185.2
2022-06-30T00:00:00.000
[ "Physics", "Education" ]
Impact of high and low vorticity turbulence on cloud environment mixing and cloud microphysics processes Turbulent mixing of dry air affects the evolution of the cloud droplet size spectrum through various mechanisms. In a turbulent cloud, high and low vorticity regions coexist, and inertial clustering of cloud droplets can occur in the low vorticity regions. The non-uniformity in the spatial distribution of droplet size and number, the variable vertical velocity in vortical turbulent structures, and dilution by entrainment/mixing may result in spatial supersaturation variability, which affects the evolution of the cloud droplet size spectrum through condensation and evaporation. To untangle the processes involved in the mixing phenomena, a direct numerical simulation (DNS) of turbulent mixing followed by droplet evaporation/condensation in a sub-cubic-meter domain with a large number of droplets is performed in this study. The analysis focuses on the thermodynamic and microphysical characteristics of the droplets and flow in high and low vorticity regions. The impact of vorticity production in turbulent flows on mixing and cloud microphysics is illustrated. Observations and numerical models are being applied to investigate this multiscale problem. The movement and position of droplets are controlled by turbulent eddies of varying sizes. Simultaneously, evaporation or condensation of droplets incurs changes in the local environment (at the scale of the droplet itself) through latent heat exchange. Buoyancy generated by phase change (when many droplets evaporate or condense rather than a few) may impact cloud-scale motions. A quantity called the particle response time, τp, determines how quickly droplets respond to changes in the surrounding fluid motion. Some droplets are tiny and precisely follow the flow trajectory, while larger ones can modify the flow. Thus, the droplet-turbulence interaction is a multi-scale process and non-local in nature. There are several macrophysical and microphysical implications of droplet and turbulence interactions. Droplets in the decaying part of a cloud may be transported by turbulence to the more active regions of the cloud and undergo further growth (Jonas, 1991). The inhomogeneous mixing model used by Cooper et al. (1986) and Cooper (1989) shows that when a parcel of air undergoes successive entrainment events, each of which reduces the droplet concentration, enhanced growth is possible. However, Jonas (1996) argues that ascent leading to droplet growth may activate some entrained nuclei, limiting the maximum supersaturation achieved, which in turn limits the growth rate. Vaillancourt et al. (1997) give further insight into the nature of turbulent entrainment at the cloud edges. They argue that the interaction between the ambient and the cloudy air is not the same everywhere; rather, there are some prominent regions of entrainment with vortex circulations. The study of microphysical droplet-turbulence interaction has gained momentum in recent years due to advances in computational capabilities. Several possibilities, like turbulence-induced supersaturation fluctuations and enhanced collision rates, have been investigated. Some studies (Chen et al., 2016; Franklin et al., 2005; Pinsky et al., 2000; Riemer and Wexler, 2005; Shaw, 2003; Vaillancourt and Yau, 2000) indicate an enhanced collision rate in a turbulent environment. Shaw et al.
(1998) performed DNS and found that preferential clustering of droplets in the low vorticity regions of a cloud gives rise to spatially varying supersaturation. Droplets in the high vorticity regions experience enhanced supersaturation and grow faster. However, the comments of Grabowski and Vaillancourt (1999) on the results of Shaw et al. (1998) pointed out several shortcomings, in particular the absence of droplet sedimentation, the assumption of a high volume fraction of vortex tubes (50%), and the strong dependence on the vortex lifetime. There is no clear theory regarding vorticity characteristics in three-dimensional homogeneous turbulent flows, despite increasing research on turbulence. Vorticity has a profound impact on the spatial distribution of droplets. Due to preferential clustering, relatively few droplets are left in the high vorticity regions. The difference in the spatial distribution of droplets induces supersaturation fluctuations. A low number of droplets competing for the available vapour field in the high vorticity regions should experience enhanced growth rates, and for this to happen, droplets should stay there for a duration long enough for the supersaturation field to act. However, little is known about the length scales and lifetimes of the high vorticity regions (Grabowski and Wang, 2013). Due to these limitations, the effect of preferential clustering on diffusional growth is poorly understood. In this study, we examine the diffusional growth and evaporation of cloud droplets in an entrainment and mixing simulation setup of DNS. We compared droplet characteristics such as spectral width, volume mean radius, number concentration, the probability density function of droplet radii, and supersaturation in high and low vorticity regions. As reported by Vaillancourt et al. (1997), the main entrainment sites and mixing zones were located in the vortex circulation areas. Similar to Vaillancourt et al. (1997), we aim to look, using DNS, for locations with vortex circulations in the main entrainment sites and mixing zones. The organisation of the paper is as follows. The next section provides details of the methods employed and the data used. Results and discussion are provided in section 3, with four subsections discussing the flow and droplet characteristics in low and high vorticity regions. In the last part, we conclude our analysis. Data and Methods We carried out a Direct Numerical Simulation (DNS) following the setup of Kumar et al. (2014, 2018) to simulate the entrainment and mixing mechanisms at cloud edges. This DNS code uses the Euler-Lagrangian frame, solves the flow equations at each grid point, and tracks each droplet inside a grid box by integrating equations for its position, velocity, and growth rate. The simulation produces output in two formats: the Eulerian-frame output is written in the NetCDF format developed by UCAR/Unidata, and the droplet dynamics output is saved in SION format (SIONLib, 2020). The simulation domain was chosen as (51.2 cm)³ with 1 mm grid resolution, thus containing a total of 512³ grid points. The initial setup of the computational domain is presented in Figure 1(a). Four simulation setups were considered for this study: two relative humidity (RH) cases (85% and 22%), each with both mono-dispersed and poly-dispersed droplet size distributions initialized in the DNS.
The mono-dispersed case uses a single droplet size of 20 µm (an idealistic case), whereas the poly-dispersed setup uses a droplet size distribution (size range 2-18 µm) from cloud observations (CAIPEEX experiment: https://www.tropmet.res.in/caipeex/), similar to the earlier simulations of Kumar et al. (2017). The two humidity cases, corresponding to dry (RH=22%) and moist (RH=85%) environmental air, were taken from observations of the monsoon environment, with a cloudy slab in the DNS domain to simulate the mixing processes. Vorticity magnitudes were first calculated using the Eulerian data at each grid point containing the velocity components in the X, Y and Z directions. The next step is to find the high and low vorticity regions in the DNS computational domain, which requires calculating and visualizing the actual vortices generated by the turbulent flow. Since the grid size is 1 mm, it is unfeasible to resolve a vortex inside a single grid box; instead, a vortex area containing multiple grid boxes must be sought out. It is a challenging task to locate a small box that covers a minimal portion of the low vorticity area. We used an unsupervised machine learning (ML) algorithm, described in the next subsection, to address this problem. Locating High Vorticity Regions To locate high vorticity regions in the domain, we used the k-means clustering algorithm from the Scikit-Learn Python package (Pedregosa, 2011). k-means clustering (Bock, 2007) is one of the most popular and simplest unsupervised machine learning algorithms. It makes 'k' groups or clusters from a dataset based on the Euclidean distance between individual data points. However, k-means clustering cannot guess the optimum number of clusters for a particular dataset; instead, the user has to assign it. Therefore, selecting the number of groups or clusters into which a dataset has to be grouped is crucial. This algorithm was used to locate high vorticity regions from the vorticity data. The vorticity magnitude ω is calculated from the velocity components as (see Chapter 4.2 in Holton and Hakim (2013)) ω = |∇ × u| = [(∂w/∂y − ∂v/∂z)² + (∂u/∂z − ∂w/∂x)² + (∂v/∂x − ∂u/∂y)²]^(1/2). The values of ω in our DNS data range from 0 to 200 s⁻¹, as seen in panel (a) of Figure 1. Vaillancourt and Yau (2000) documented that only a small fraction of the domain is occupied by high vorticity regions and that no preferential concentration of cloud droplets was observed in the cloud core. We also found that less than 2% of grid points (by volume) have a vorticity magnitude above 60 s⁻¹, while for larger magnitudes even fewer (almost negligible numbers of) grid points were found. Based on these findings, in this work, a threshold vorticity magnitude of 60 s⁻¹ was chosen as the high vorticity criterion. We also investigated the droplet characteristics taking 50 s⁻¹ as the threshold, but it did not make any difference in the trends. Considering 30 s⁻¹ as a threshold for high vorticity, which is less than 1/5 of the maximum vorticity magnitude (200 s⁻¹), does not seem justified. Figure 1(b) depicts the fraction of grid points occupied for different threshold values of vorticity magnitude. Once the threshold value for the vorticity magnitude is decided, the next step is to locate 3D boxes enclosing the high vorticity regions, which is accomplished by k-means clustering. Here, two input variables have to be assigned: (i) the number of clusters ('k') and (ii) the maximum number of iterations. Since vortices have tubular or sheet-type structures, a 3D box may contain many low-vorticity regions for a typical value of 'k'.
Hence, an optimal value of 'k' is required for choosing small enough 3D boxes to avoid the low-magnitude vortices. We identified the value of 'k' to be used in the algorithm by conducting several experiments and selected the optimal value k = 3500 based on the chosen threshold value for high vorticity. This finding of a tiny fraction of high vorticity regions in a cloud core is significant because it is completely different from the results documented in Shaw et al. (1998). They hypothesize that preferential concentration (inertial clustering) occurs at small spatial scales and that low (high) particle concentration corresponds to high (low) vorticity regions. Furthermore, they used a Rankine vortex model and did not calculate vorticity from the velocity field of the DNS as we did in this study. A comparison of the methods in this work and in the study by Shaw et al. (1998) is provided in Table 1. Table 1. Comparison of findings in this study and in Shaw et al. (1998): entrainment-mixing: not included in Shaw et al. (1998), included in this study; vortex lifetime: 2-3 orders of magnitude greater than the Kolmogorov timescale (≈10 seconds) in Shaw et al. (1998) versus less than 1 second in this study; volume fraction of high vorticity: ≈50% in Shaw et al. (1998) versus less than 2% in this study. Kolmogorov time scales in natural clouds are in the range 0.01-0.1 seconds; the Kolmogorov time scale for this DNS is 0.0674 seconds, which lies in that range. Results and discussion In this section, we discuss various analyses of the two humidity cases, considering the initial poly-dispersed size distribution. Turbulence characteristics at the edges and core of cloud The interface between the cloud volume and the sub-saturated air is distinguishable during the early evolution of the flow. To see if any features of the flow exhibit distinct properties at the edges, three separate volumes from the entire domain have been picked out. The cloudy slab area lies between 142 mm and 372 mm along the x-axis, and the rest is occupied by the sub-saturated air. Of the two interface volumes, one is on the left side (between x = 70 mm and 140 mm), and the other is on the right side (x = 364 mm to 434 mm). The volume lying between x = 182 mm and 322 mm is the core region, as depicted in Figure 2. The availability of more kinetic energy at the cloud edges makes them the hotspots of vorticity generation. Initially, a higher value of mean vorticity is observed at the edges in both simulation cases, as evident from the lower panel of Figure 3. Near the edges, a strong gradient of mixing ratio and temperature exists, which leads to a turbulent mixing process. TKE is generated at the interface of the cloudy slab and dry air due to negative buoyancy production by droplet evaporation. This energy is transported into the cloud slab with progressing time by the vortices ('eddies') that propagate inside. However, there are periodic changes in the TKE variations between the interface and the cloud slab, possibly due to the periodic boundary conditions of the DNS setup. A notable feature between the two RH simulations (22% and 85%) is the larger difference between the interface and the cloud slab in the drier case (RH=22%), which undergoes stronger droplet evaporation. Flow characteristics in high and low vorticity regions The previous subsection presented the variation in flow characteristics at the cloud core and edges. Another part of this study is to investigate these characteristics in the high vorticity regions of the turbulent flow. We considered the high vorticity (HV) area to be that having a vorticity magnitude greater than 60 s⁻¹.
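A minimal sketch of the workflow just described (not the authors' code) is given below: the vorticity magnitude is computed from a gridded velocity field with centred differences, grid points above the 60 s⁻¹ threshold are kept, and scikit-learn's k-means groups them so that small 3D boxes can be drawn around high vorticity regions. The velocity field, grid size and number of clusters here are stand-in values.

```python
# Sketch: vorticity magnitude from a gridded velocity field, thresholding, and k-means
# clustering of the high-vorticity grid points (stand-in data, reduced resolution and k).
import numpy as np
from sklearn.cluster import KMeans

dx = 1e-3                                   # 1 mm grid spacing, as in the DNS domain
shape = (32, 32, 32)                        # reduced from 512^3 to keep the example light
rng = np.random.default_rng(1)
# u, v, w would normally be read from the Eulerian NetCDF output; random fields stand in here.
u, v, w = (rng.standard_normal(shape) for _ in range(3))

# Vorticity components from the curl of the velocity field (arrays indexed [x, y, z]).
wx = np.gradient(w, dx, axis=1) - np.gradient(v, dx, axis=2)
wy = np.gradient(u, dx, axis=2) - np.gradient(w, dx, axis=0)
wz = np.gradient(v, dx, axis=0) - np.gradient(u, dx, axis=1)
omega = np.sqrt(wx**2 + wy**2 + wz**2)      # vorticity magnitude at every grid point

# Coordinates of high-vorticity grid points (threshold chosen as in the text).
hv_points = np.argwhere(omega > 60.0)

# Cluster the high-vorticity points; k = 3500 in the paper, a much smaller k is used here.
km = KMeans(n_clusters=50, n_init=5, max_iter=300, random_state=0).fit(hv_points)

# Each cluster can then be enclosed by a small 3D box (min/max corner per cluster).
boxes = []
for label in range(km.n_clusters):
    pts = hv_points[km.labels_ == label]
    if len(pts):
        boxes.append((pts.min(axis=0), pts.max(axis=0)))
print(f"{len(hv_points)} high-vorticity points grouped into {len(boxes)} boxes")
```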
Similarly, points with vorticity of less than 30 s⁻¹ were classified as regions of low vorticity (LV). We investigated the evolution of the mixing ratio (qv) in both the drier (RH=22%) and humid (RH=85%) cases. The incursion of drier air results in a lower mixing ratio at the edges. In both the HV and LV regions, apart from the mixing ratio, we also determined the root mean square velocity u_rms. For the dry and humid cases, u_rms is found to be higher in the HV regions. We examined the droplet features for RH=22% and RH=85% after investigating the flow properties, as described in the next subsection. Droplet Characteristics One of the main aims of this study is to examine various droplet characteristics, such as number concentration, volume mean radius, spectral width, and the mixing process, in the HV and LV regions. Number concentration and mean volume radius There has been much research on the distribution of droplets in a turbulent flow field. Several laboratory studies (Lian et al., 2013) and model simulations (Shaw et al., 1998; Ayala et al., 2008) reported the process of preferential clustering of cloud droplets in low vorticity regions. Preferential clustering means that droplets prefer to cluster in some specific flow regions rather than being randomly distributed everywhere. A high amount of rotation characterizes the highly vortical part of a fluid. When a droplet enters this region, it is flung out due to its inertia and accumulates in a low vorticity region. This process leads to a heterogeneous droplet concentration in space, an important aspect that affects the droplet growth rate and size distribution. In a poly-dispersed droplet size distribution, the larger droplets are more prone to be affected by the vorticity compared to smaller droplets, which may follow the flow streamlines due to their low inertia. For this reason, larger droplets accumulate in the low vorticity regions, resulting in a larger mean volume radius there. In the upper panel of Figure 3, both humidity cases show almost the same trend, i.e., a higher number concentration in the low vorticity region due to inertial clustering. The lower panel of Fig. 3 shows the variation of the mean volume radius in high and low vorticity regions. The arid-like condition in the case with RH=22% leads to quick evaporation of droplets, indicated by a rapid decay in droplet number concentration and mean volume radius. Consequently, the number concentration curves almost merge after 7.5 seconds. Due to preferential clustering, the high vorticity regions have a relatively small number of droplets. It is to
To clarify whether preferential clustering alone decides the volume mean radius distribution or whether another mechanism may be responsible, we investigated the droplet spectra, the trends of the mean supersaturation, and the evolution of the droplet size distribution.
The variation of the spectral width is presented in Figure 5, showing an entirely different picture of the evolution of the droplet spectra in the two cases. We have initialized the DNS with a poly-dispersed DSD having a spectral width of nearly 2.2 µm, as observed in monsoon cumulus clouds over India (Bera et al., 2016). During the mixing of the cloud slab with the environmental dry air, the spectral width of the DSD in the RH22% case increases rapidly during the initial 5-7 seconds and decreases thereafter. In the RH85% case, a gradual increase can be seen for the initial 10 seconds, after which the spectral width remains almost constant. One of the most important results of this study is that the spectral width of the DSD is different in high and low vorticity regions. For the RH22% case, the spectral width is higher for droplets situated in the high vorticity region during the initial 5 seconds of mixing, when the spectral width increases rapidly. The opposite scenario occurs after 7.5 seconds, i.e., a smaller spectral width in the high vorticity region. Nevertheless, the spectral width always remains higher in the high vorticity region in the RH85% case. The initial growth and later decay of the spectral width during mixing for the RH22% case is associated with the modification of the spectral shape by droplet evaporation and number concentration dilution (see Bera (2021)). In this case, evaporation is very significant because of mixing with much drier environmental air. When evaporation starts, the smaller droplets of the DSD evaporate faster than the larger droplets, governed by the inverse relation of growth rate with droplet size (Rogers and Yau, 1996). As a result, the spectra broaden towards the smaller size tail, as shown in the upper panel of Fig. 6. This is the reason for the increasing spectral width during the initial 5 seconds of the RH22% case and during the entire 20 seconds of the RH85% case. However, when evaporation is strong enough that the smaller size tail of the spectrum is evaporated completely and only the larger droplets remain to evaporate, the spectrum starts shrinking and the spectral width decreases (as shown in Fig. 6c). This is the situation for RH22% during mixing after 7.5 seconds; it does not occur in the RH85% case, where evaporation is not sufficient owing to moist air mixing (as shown in Fig. 6d). The difference in spectral width between the HV and LV regions can be explained by the higher droplet evaporation in the high vorticity region. Initially, high vorticity forms at the cloud edges where dry air mixing occurs, leading to faster evaporation of droplets. The second possibility is that high vorticity regions are pockets of rotating air motions that can easily transport the vapour mass (produced by droplet evaporation) out of the region, thereby facilitating enhanced evaporation. These two plausible reasons result in a higher evaporation rate in regions of high vorticity and a consequent impact on the droplet spectral width. The differences in the PDFs of the HV and LV regions can be noticed very well at 3 and 17.8 seconds. During this time, a greater spectral width exists in the high vorticity region (refer to Figure 4) for both humidity cases (i.e., RH=22% and RH=85%). The PDFs confirm that high and low vorticity regions contain almost the same maximum and minimum drop sizes, but the difference comes from the distribution.
So, the following possibilities arise based on what the PDFs depict:
• Supersaturation is lower in high vorticity regions, which triggers enhanced evaporation.
• Due to enhanced evaporation, there are smaller droplets in the vortices, as depicted by the size distribution.
• Bigger droplets are more likely to be thrown out of the vortices, leaving only the smaller ones.
For better understanding, we also analysed the trend of the droplet supersaturation. Figure 7 depicts the mean and standard deviation of the supersaturation variation in HV and LV regions for the RH=22% and 85% cases. The droplets in high vorticity regions experience comparatively lower supersaturation until around 6 seconds, after which the difference tends to vanish. Hence, during entrainment of drier air into the cloudy volume, droplets encounter a more sub-saturated environment in the highly vortical regions, and it is the lower supersaturation values that produce a larger standard deviation. In the RH22% case, the supersaturation drops to -15%, while in the RH85% case, although a similar pattern exists, the supersaturation drops only to -3%.
Figure 7. Upper panel: evolution of mean supersaturation in high and low vorticity regions. Supersaturation is lower in the high vorticity regions during the initial 5-7 seconds, and its value reaches a minimum of -15% in the RH22% case, while it drops only to -3% in the RH85% case. Lower panel: evolution of the standard deviation of supersaturation in high and low vorticity regions. The standard deviation shows a steeper increase and decrease in the high vorticity regions and has a large variation (0-14) in the RH22% case, while the variation is small (0-2.8) in the RH85% case.
Degree of mixing
One of the best metrics to investigate the entrainment and mixing process is the degree of mixing, which depends on the mixing diagram and has wide application in numerical models (Lehmann et al., 2009; Kumar et al., 2017, 2018). We have analysed the mixing diagrams and the degree of mixing in the high and low vorticity regions for both the moist and dry cases. Variations in the mixing diagrams in high and low vorticity regions for the RH22% case are depicted in panels (a) and (b), respectively, of Figure 8. Panel (c) shows the evolution of the degree of homogeneous mixing, and a comparison of Damkohler numbers for all four cases is presented in panel (d). The Damkohler number also measures the degree of mixing as a quantity related to two time scales, namely the fluid time scale (τ_fluid = L/U_rms) and the phase relaxation time scale (τ_phase = 1/(4π n_d D r)) (Kumar et al., 2012), where U_rms is the root-mean-square of the turbulent velocity fluctuation, L is a characteristic large (energy injection) scale of the flow, n_d is the droplet number density, D is the modified diffusivity, and r denotes the initial volume mean radius. The Damkohler number, Da = τ_fluid/τ_phase, represents an estimate of the mixing scenario. Da >> 1 indicates an inhomogeneous process, while Da << 1 represents a homogeneous one (Latham and Reed, 1997). In panel (d) of Figure 8, the evolution of the Damkohler number is shown. Low vorticity regions always have a bigger Da than high vorticity ones. A value closer to 0 indicates a higher degree of homogeneous mixing. Like the mixing diagrams, the Damkohler number also suggests greater homogeneous mixing in the high vorticity regions.
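As a worked example of these time scales, the snippet below (Python; all numerical values are illustrative assumptions chosen only to resemble cloudy-slab conditions, not the actual DNS parameters) evaluates τ_fluid, τ_phase, and the resulting Damkohler number, and flags the implied mixing regime.

```python
import math

def damkohler(L, u_rms, n_d, D, r):
    """Da = tau_fluid / tau_phase, with tau_fluid = L/u_rms and tau_phase = 1/(4*pi*n_d*D*r)."""
    tau_fluid = L / u_rms
    tau_phase = 1.0 / (4.0 * math.pi * n_d * D * r)
    return tau_fluid / tau_phase, tau_fluid, tau_phase

# Illustrative values: 0.3 m energy-injection scale, 0.1 m/s velocity fluctuation,
# 100 cm^-3 droplet number density, vapour diffusivity ~2.5e-5 m^2/s, 10 um mean radius.
da, tf, tp = damkohler(L=0.3, u_rms=0.1, n_d=100e6, D=2.5e-5, r=10e-6)
regime = "inhomogeneous" if da > 1 else "homogeneous"
print(f"tau_fluid = {tf:.2f} s, tau_phase = {tp:.2f} s, Da = {da:.2f} ({regime} mixing)")
```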
High vorticity (i.e., circulations of fluid) indeed helps to promote faster mixing and produce a well-mixed, homogeneous cloud volume.

Mono-disperse case
In the previous sections, we discussed simulations considering a poly-dispersed cloud droplet size distribution based on observation. Here, we have conducted additional simulations with a mono-dispersed droplet size (20 µm) to investigate the effect of vorticity on same-sized droplets. The analysis of the simulation data from the mono-dispersed case yielded results quite similar to those of the poly-dispersed case. The findings are summarized in Table 2. Due to droplet evaporation associated with entrainment-mixing and condensation growth in the slab region, the mono-dispersed spectrum widens and generates various droplet sizes. A higher droplet number concentration and a greater mean volume radius are found in the low vorticity region, indicative of inertial clustering of larger droplets. Various other properties are also found to be similar to those of the poly-dispersed droplet simulation case, as can be noted in Table 2.

Table 2. Summary of observations in high and low vorticity regions for the two humidity cases.
Characteristics | Observations in the case with RH=22% | Observations in the case with RH=85%
Mixing ratio | Lower mixing ratio in the high vorticity regions during the initial 10 seconds. | Lower mixing ratio in the high vorticity regions during the initial 10 seconds.
RMS velocity | Higher root mean square velocity in the high vorticity regions. | Higher root mean square velocity in the high vorticity regions.
Number concentration | The number concentration is always higher in the low vorticity regions. | The number concentration is always higher in the low vorticity regions.
Volume mean radius | The volume mean radius is always greater in the low vorticity regions. | The volume mean radius is always greater in the low vorticity regions.
Spectral width | A greater spectral width in high vorticity regions during the initial 5-7 seconds and an unclear trend after that due to the droplets' fast evaporation. | The spectral width is always greater in the high vorticity regions.
Mean supersaturation | Lower supersaturation in high vorticity regions during the initial 5-7 seconds; supersaturation drops to a minimum value below -15%. | Lower supersaturation in high vorticity regions during the initial 5-7 seconds; the drop is much smaller (about -3%, see text).
Standard deviation of droplet supersaturation | Greater standard deviation in the high vorticity regions during the initial 2-3 seconds, a reverse trend from 3-7 seconds, and no difference thereafter. | Greater standard deviation in the high vorticity regions during the initial 2-3 seconds, a reverse trend from 3-7 seconds, and no difference thereafter.
Mixing | Mixing scenarios estimated through both methods, i.e., the mixing diagram and the Damkohler number, indicate more homogeneous mixing in the high vorticity regions. | A higher degree of homogeneous mixing in this case also.

Conclusions
Droplet characteristics in high vorticity (HV) and low vorticity (LV) regions in a three-dimensional DNS of a cumulus cloud have been studied. We have taken two initial drop size distributions, one from the CAIPEEX observation (poly-dispersed) (Bera et al., 2016) and the other a mono-dispersed size distribution, and performed entrainment simulations with two initial relative humidity values, viz. 22% and 85%, for the ambient air that mixes with the cloud slab. A DNS model setup similar to Kumar et al. (2014) has been considered. This setup has a cloudy volume and surrounding sub-saturated air, which are allowed to mix as the entrainment simulation kicks in.
During the entrainment and mixing process, the flow in the domain develops spatially varying characteristics. The magnitude of turbulence (decaying with time) is not the same everywhere. Some regions are highly turbulent and possess a high value of vorticity. These vortices may influence the distribution and growth of cloud droplets. To study the dependence of droplet characteristics on vorticity, we located HV regions in the computational domain. Finding HV regions is challenging because the shape, size, and position of vortices change within a fraction of a second. We have applied an unsupervised machine learning algorithm, k-means clustering, to categorize the high and low vorticity clusters. To our knowledge, this is the first time a machine learning algorithm has been used to investigate cloud turbulence properties. We answered the following scientific questions in this study:
(i) What volume fraction of the domain is occupied by intense vorticity?
(ii) Where is the cloud-clear air interaction most prominent: in highly turbulent regions or in weakly turbulent regions?
(iii) Is preferential clustering the same for all droplet sizes? How do the spectral properties of droplets vary in high and low vorticity regions?
(iv) Does the relative humidity of the ambient air have any impact on the evolution of the droplet size spectra?
(v) What is the degree of mixing homogeneity in high vorticity and low vorticity regions?
Entrainment and mixing is a turbulent process, and during the initial few seconds the cloud edges, where a large gradient of the water vapor field exists, are the most turbulent. More robust KE fluctuations were found at the cloud edges, making them hotspots for vorticity generation. A distinct difference in the KE fluctuations was noted between the two RH simulations (22% and 85%), with a bigger difference observed in the drier (RH=22%) case. The turbulent velocity u_rms was found to be higher in HV regions for both simulation cases. Droplets tend to cluster in the LV region, with smaller droplets showing less tendency to do so, which may lead to heterogeneous number concentration in space and time, consequently affecting the droplet size distribution. Clustering of larger droplets in the LV region resulted in a higher mean volume radius there. The most important result from this study is the different spectral widths (σ) in the HV and LV regions. In the drier case, a higher value of σ occurred in the HV region during the first 5 seconds, and after that the opposite scenario was observed. This opposite behavior can be connected to droplet evaporation and dilution of the number concentration in the HV region. The spectral width always remains higher in the HV area for the moist case (RH=85%); this may be because of higher droplet evaporation influenced by the presence of rotating air pockets, which help transport the vapor mass out of the HV region. The intrusion of subsaturated air is most prominent in the high vorticity regions, which is reflected in the evolution of the droplet supersaturation. Enhanced evaporation produces wider droplet and supersaturation spectra. The time series of the droplet number concentration and volume mean radius can be used to obtain mixing diagrams for the high and low vorticity regions. The degree of mixing calculated from the mixing diagram shows more mixing homogeneity in the HV regions. The Damkohler number, which depends on the fluid and phase time scales, also indicates a higher degree of homogeneous mixing in HV regions.
Similar observations were found for the mono-disperse case as well. We emphasize that our findings are strongly affected by the entrainment and mixing of drier air at the cloud edges. The results may differ in adiabatic cloud cores where entrainment and mixing are absent. Data availability. The DNS output data used in this study are archived on the HPC Aaditya at IITM Pune and can be made available on request. Author contributions. BK and MKY formulated the concept of this work. RR ran the simulations and produced the results. SB did the analysis and contributed to the manuscript preparation. SAR contributed to the manuscript preparation.
7,095.2
2021-01-01T00:00:00.000
[ "Environmental Science", "Physics" ]
Apical size and deltaA expression predict adult neural stem cell decisions along lineage progression The maintenance of neural stem cells (NSCs) in the adult brain depends on their activation frequency and division mode. Using long-term intravital imaging of NSCs in the zebrafish adult telencephalon, we reveal that apical surface area and expression of the Notch ligand DeltaA predict these NSC decisions. deltaA-negative NSCs constitute a bona fide self-renewing NSC pool and systematically engage in asymmetric divisions generating a self-renewing deltaAneg daughter, which regains the size and behavior of its mother, and a neurogenic deltaApos daughter, eventually engaged in neuronal production following further quiescence-division phases. Pharmacological and genetic manipulations of Notch, DeltaA, and apical size further show that the prediction of activation frequency by apical size and the asymmetric divisions of deltaAneg NSCs are functionally independent of Notch. These results provide dynamic qualitative and quantitative readouts of NSC lineage progression in vivo and support a hierarchical organization of NSCs in differently fated subpopulations. The PDF file includes: Figs. S1 to S6, Table S1, and legends for movies S1 and S2. Other Supplementary Material for this manuscript includes the following: Movies S1 and S2. Supplementary Fig. S1 (related to Fig. 1). Apical area and deltaA expression by cell type and state. S1A. Boxplots showing the distribution of cell anisotropies, perimeters, AAs and numbers of neighbors as a function of cell types and states (qNSCs, aNSCs and aNPs). n=4 independent hemispheres, Dm region, same samples as in Fig. 1G. Figure S2. S2B. 2D projection methods. Four different projection methods were tested on the time-lapses to project the ZO1-mKate2 signal in 2D: maximum projection and local Z projector (both available in Fiji), preMosa (source code available on the preMosa GitHub) and CARE (Python code available on the CSBdeep GitHub). Zoom on a very low-resolution area in the image to compare the four projection methods. We selected CARE as the best method to resolve the ZO1 staining and improve the segmentation afterwards. S2C. Closer look at deltaA:GFP expression intensities by color-coding eGFP intensity (FIRE lookup tables). Manual correction of the deltaA signal is necessary to ensure a correct assignment of deltaA expression to individual NSCs, because the apical surface and the corresponding underlying cell cytoplasm, which is GFP-positive, are not always in perfect register. Thus, many NSCs negative for deltaA are wrongly classified as positive if neighboring a balloon-shaped deltaApos cell. Green arrows show examples where the GFP signal from one cell invades a neighbouring AA in 2D. S2D. Manual scoring of the deltaA:GFP signal to add quantitative information to the segmentation of each cell, based on the FIRE LUTs and using visual and temporal criteria (see Materials & Methods). Examples of NSCs with no deltaA (0), weak deltaA (1), medium deltaA (2), and strong deltaA (3) levels. For analyses using quantitative deltaA values (Figs. S3C, S4B, S5A), deltaA expression was calculated using the manual segmentation. The value of deltaA:GFP expression for NSCs classified as "no deltaA" is set at 0, and the values for other NSCs are calculated as the sum of the pixel intensity normalized by the apical area.
S2E. Comparison of the exact quantitation of deltaA:GFP expression (normalized by the area of each cell, see 2D) with manually assessed intensity scores for all cells in one time-lapse. S2F. Alignment of live and fixed images. We are able to trace back previously live-imaged NSC apical surfaces on 3-mpf double transgenic Tg(gfap:hZO1-mKate2);Tg(mcm5:gfp) fish after fixation and IHC for SOX2, ZO1 and PCNA. Left: live-imaged tissue; right: fixed and immunostained tissue. The images on the right panels show the segmentation of the same group of cells. S2G. Comparison of AAs in live and fixed/immunostained samples. Scatter plot showing AA before fixation as a function of AA after fixation for the same cell. Linear regression line with 95% CI (slope = 0.89 and R squared = 0.92). The average AA ratio between fixed AA and live AA is calculated as follows: AA ratio fixed/live = (Fixed AA - Live-imaged AA) / (Live-imaged AA) x 100. On average, fixed AAs are 5% larger than live-imaged AAs. The statistical difference between the two segmented regions was assessed by a two-tailed non-parametric t-test (Mann-Whitney) (n.s., p-value = 0.6667) (n = 95 cells on 2 fish). Figure S3. S3D. Representation of all dividing tracks (n = 194, from 828 NSCs tracked in 3 fish), classified according to deltaA expression of MCs, and showing deltaA expression (deltaApos NSCs: blue, deltaAneg NSCs: black) and AA (y axis, in µm²) as a function of time (x axis, in days) (each dot is an imaging time point). Red arrowheads mark the different DCs for each track (division events post-quiescence phases; note that there can be several such events along the same track when DCs enter quiescence before dividing again). Green arrowheads mark the DCs from reiterative divisions (successive division events that occur without a quiescence phase in between).
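As a small illustration of the comparison described in S2G, the sketch below (Python with NumPy/SciPy; the arrays of apical areas are hypothetical placeholders, not the study's measurements) computes the percent area change between fixed and live-imaged apical surfaces and applies a two-tailed Mann-Whitney test, mirroring the quantitation described above.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical paired apical areas (um^2) for the same cells, live vs. fixed.
rng = np.random.default_rng(42)
live_aa = rng.gamma(shape=4.0, scale=5.0, size=95)
fixed_aa = live_aa * rng.normal(1.05, 0.08, size=95)   # ~5% larger after fixation

# Percent change of fixed relative to live, per cell, then averaged.
aa_ratio = (fixed_aa - live_aa) / live_aa * 100.0
print(f"mean AA change (fixed vs. live): {aa_ratio.mean():.1f} %")

# Two-tailed Mann-Whitney U test between the two groups of areas.
stat, p = mannwhitneyu(live_aa, fixed_aa, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
```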
1,103.2
2022-12-26T00:00:00.000
[ "Biology" ]
Gene expression analysis of flax seed development Background Flax, Linum usitatissimum L., is an important crop whose seed oil and stem fiber have multiple industrial applications. Flax seeds are also well-known for their nutritional attributes, viz., omega-3 fatty acids in the oil and lignans and mucilage from the seed coat. In spite of the importance of this crop, there are few molecular resources that can be utilized toward improving seed traits. Here, we describe flax embryo and seed development and generation of comprehensive genomic resources for the flax seed. Results We describe a large-scale generation and analysis of expressed sequences in various tissues. Collectively, the 13 libraries we have used provide a broad representation of genes active in developing embryos (globular, heart, torpedo, cotyledon and mature stages) seed coats (globular and torpedo stages) and endosperm (pooled globular to torpedo stages) and genes expressed in flowers, etiolated seedlings, leaves, and stem tissue. A total of 261,272 expressed sequence tags (EST) (GenBank accessions LIBEST_026995 to LIBEST_027011) were generated. These EST libraries included transcription factor genes that are typically expressed at low levels, indicating that the depth is adequate for in silico expression analysis. Assembly of the ESTs resulted in 30,640 unigenes and 82% of these could be identified on the basis of homology to known and hypothetical genes from other plants. When compared with fully sequenced plant genomes, the flax unigenes resembled poplar and castor bean more than grape, sorghum, rice or Arabidopsis. Nearly one-fifth of these (5,152) had no homologs in sequences reported for any organism, suggesting that this category represents genes that are likely unique to flax. Digital analyses revealed gene expression dynamics for the biosynthesis of a number of important seed constituents during seed development. Conclusions We have developed a foundational database of expressed sequences and collection of plasmid clones that comprise even low-expressed genes such as those encoding transcription factors. This has allowed us to delineate the spatio-temporal aspects of gene expression underlying the biosynthesis of a number of important seed constituents in flax. Flax belongs to a taxonomic group of diverse plants and the large sequence database will allow for evolutionary studies as well. Background Flax (Linum usitatissimum L.) is a globally important agricultural crop grown both for its seed oil as well as its stem fiber. Flax seed is used as a food source and has many valuable nutritional qualities. The seed oil also has multiple industrial applications such as in the manufacture of linoleum and paints and in preserving wood and concrete. The fiber from flax stem is highly valued for use in textiles such as linen, specialty paper such as bank notes and in eco-friendly insulations [1]. Flax belongs to the family Linaceae and is one of about 200 species in the genus Linum [2]. It is a self-pollinating annual diploid plant with 30 chromosomes (2n = 30), and a relatively small genome size for a higher plant, estimated at~700 Mbp [3,4]. Although flax demonstrates typical dicotyledonous seed development, there are species-specific differences compared to, for instance, Arabidopsis thaliana seed development. However, very little is known about genes expressed during flax seed development. 
Advancing this knowledge and comparison of gene expression profiles and gene sequences would provide new insights into flax seed development. Nutritionally, flax seed has multiple desirable attributes. It is rich in dietary fiber and has a high content of essential fatty acids, vitamins and minerals. The seeds are composed of~45% oil, 30% dietary fiber and 25% protein. Around 73% of the fatty acids in flax seed are polyunsaturated. Approximately 50% of the total fatty acids consist of α-linolenic acid (ALA), a precursor for many essential fatty acids of human diet [5]. Flax seed is also a rich source of the lignan component secoisolariciresinol diglucoside (SDG). SDG is present in flax seeds at levels 75 -800 times greater than any other crops or vegetables currently known [6,7]. In addition to having anti-cancer properties, SDG also has antioxidant and phytoestrogen properties [8]. Flax seed contains about 400 g/kg total dietary fiber. This seed fiber is rich in pentosans and the hull fraction contains 2-7% mucilage [9]. The other major constituent of flax seeds are storage proteins that can range from 10-30% [10]. Globulins are the major storage proteins of flax seed, forming about 58-66% of the total seed protein [11,12]. Improvement of flax varieties through breeding for various traits can be assisted by development of molecular markers and by understanding the genetic and biochemical bases of these characteristics [13,14]. The goal of this research was to develop a comprehensive genomics-based dataset for flax in order to advance the understanding of flax embryo, endosperm and seed coat development. We report the construction of 13 cDNA libraries, each derived from specific flax seed tissue stages, as well as other vegetative tissues together with the generation of ESTs derived from these libraries and the related assembled unigenes. We mined the resulting database with the goal of revealing new insights into the gene expression in developing seeds in comparison to that of vegetative tissues and other plant species. We show the usefulness of this database as a tool to identify putative candidates that play critical roles in biochemically important pathways in the flax seed. Specifically we analyzed gene expression during embryogenesis as related to fatty acid, flavonoid, mucilage, and storage protein synthesis and transcription factors. Seed development characteristics in flax Limited information is available regarding flax seed development, despite its economic importance. Since the seed is an economically important output of this crop, in this study, we performed a detailed analysis of embryogenesis and flax seed development. The flax seed consists of three major tissues: the diploid embryo and triploid endosperm as products of double fertilization, and the maternal seed coat tissue. Soon after fertilization, the seed is translucent and the embryo sac is upright within the integuments ( Figure 1A). The developing embryo is anchored at the micropylar end of the embryo sac. The thick, clear and fragile integuments of the fertilized ovule differentiate into the thin, dark and protective seed coat during seed development. Observation during the dissection process revealed that the endosperm initials, which formed at fertilization, undergo divisions to form a cellularized endosperm by the globular embryo stage ( Figure 1B and Figure 2H). 
The endosperm progressively increases in size up to the torpedo stage, after which time it begins to degenerate, presumably to make space for the rapidly elongating cotyledons and to provide nutritional support to the developing embryo. By the late cotyledon stage the majority of endosperm cells have been consumed, leaving a thin layer of endosperm on the inner wall of the seed coat of the maturing seed. The globular embryo (Figure 1C, 1E) has a short suspensor consisting of just four cells that is nestled into the micropylar sleeve (Figure 1D). As the embryo develops from the globular (Figure 1E) to heart (Figure 1F) and torpedo (Figure 1G) stages, the increase in embryo size is largely due to growth of the cotyledons. This is in contrast to the Arabidopsis embryo, where the increase in size is due to an increase in both the cotyledons and the embryonic axis [15]. The embryonic axis consists of the hypocotyl and radicle initials that are formed at the heart stage, and it eventually differentiates to form a short peg-like structure in the mature embryo. Whereas the tips of the cotyledon primordia are pointed in the late torpedo stages (Figure 1H), they become rounded at the top in the cotyledon stage (Figure 1I). The mature embryo (Figures 1J, 1K) is primarily composed of two large cotyledons and a relatively short embryonic axis. The cotyledons play a dual nutritional role during germination and early seedling growth. They hold much of the seed storage reserves and become photosynthetic after germination. The mature embryo contains dormant leaf primordia initials and shoot and root apical meristems that will become activated after imbibition and during the germination of the seed (Figures 1L, 1M). A cross-section of the cotyledon shows differentiation of the cortical cells into a layer of palisade cells and the compact mesophyll cells. The mesophyll cells of the cotyledon and the parenchyma cells of the hypocotyl are filled with storage deposits (Figure 1N, 1O) similar to those previously reported [16]. While flax seed development follows the general trends described for seeds of other model dicot species, there are some features that are different. For instance, unlike the Arabidopsis embryo, where the mature embryo is bent inside the anatropous seed, the flax embryo is positioned upright within the seed [15]. In the flax seed, the cotyledons take up the majority of the seed space with only a thin endosperm and seed coat left at maturity. This is in contrast to castor bean seeds, where the endosperm is thick and the cotyledons nestled within the endosperm are thinner [17]. Sequencing 13 cDNA libraries provides insights into the flax transcriptome The cDNA libraries constructed in this study provide a broad representation of seed development (8 libraries) as well as 5 libraries for vegetative tissues. The 8 seed libraries were all from the most widely cultivated Canadian linseed variety CDC Bethune and comprised globular embryo, heart embryo, torpedo embryo, cotyledon embryo, mature embryo, seed coat from the globular stage, seed coat from the torpedo stage and pooled endosperm (globular to torpedo stage). Of the 5 vegetative tissue libraries, four (flower, etiolated seedling, leaf and stem) were also from cv. CDC Bethune, and the last library was for stem peels from cv. Norlin (Figure 2K). The EST collection from single pass sequencing of the 3' end of the cDNA in plasmid clones had a median length of 613 nucleotides (nt). Each of these clones has been catalogued and stored at -80°C to allow for further studies.
Full length cDNAs have also been identified for some clones by additional 5' end sequencing. Table 1 summarizes the distribution, quantity and quality of the ESTs obtained from the 13 libraries. After removal of vector sequences, rRNA sequences, sequences <80 nt, organelle sequences and masking for repeats, 261,272 sequences remained. The assembly of a final unigene set was done in two steps. First, ESTs from each library were assembled with EGassembler [18], resulting collectively in 27,168 contigs and 51,041 singletons. This collection of 78,209 contigs and singletons was reassembled with EGassembler. Thus a unigene set for each tissue source and a unified set of unigenes encompassing all the tissues were obtained. This second assembly process resulted in 15,784 contigs and 14,856 singletons, totaling 30,640 unigenes. The 30,640 unigenes identified here likely represents a major part of the flax seed transcriptome. Table 2 shows the distribution of the clusters, contigs, singletons and unigenes in the individual libraries. The length of the contigs varies from 102 to 3,027 nucleotides with a median length of 778 nt (data not shown). The sum of the lengths of the contigs plus singletons is 21.6 megabases, which represents 3% of the predicted 700Mb flax genome [3]. The EST distribution for each unigene among the 13 tissues and its predicted or putative Arabidopsis homologue is presented in Additional File 1. A queryable flax unigene database is available at http://bioinfo.pbi.nrc.ca/portal/flax/ and all the EST sequences are also deposited in GenBank (Table 3). Of the 30,640 unigenes, 23,418 (76.4%) were identified as having significant homology with Arabidopsis gene sequences. The Arabidopsis genome is~157 Mbp [19] and has a transcriptome of~27,000 genes [20] and our analysis hints that flax potentially has a larger transcriptome than Arabidopsis. While our libraries do not give complete coverage of the flax vegetative tissues, they can be used as minimum number to estimate the size of flax transcriptome. GO annotation and functional categorization The unigene collection of 30,640 contigs and singletons was analyzed using the BLASTX algorithm against the UniProt-plants and TAIR databases. The unigenes that showed significant homology to known genes (E-value ≤ e-10) against UniProt-plants were selected for Gene Ontology (GO) annotation and further mapping of the GO terms to TAIR database which is manually and computationally curated on a ongoing basis [21]. The values generated for the different GO-categories were used to generate the classification based on molecular functions, biological processes and cellular components ( Figure 3). Based on the BLAST analysis in TAIR, 23,418 unigenes showed significant homology to Arabidopsis genes and these are listed in a spreadsheet (Additional File 1; http://bioinfo.pbi.nrc.ca/portal/flax/) along with the distribution of ESTs for each unigene from the 13 tissue libraries. Our analysis suggests that the different GO-categories are well represented in our unigene dataset indicative of a broad coverage of expressed genes in the flax genome. Hierarchical cluster analysis of flax tissue based EST collections In order to compare the gene expression profile in different tissues, the entire set of 261,272 EST sequences was subjected to hierarchical cluster analysis using the software HCE3.5 [22] (see Methods). 
Amongst the parameters required for hierarchical cluster analysis, we selected the average linkage method and the Pearson correlation coefficient for the similarity/distance measure, a technique which has been widely used in microarray analysis [23]. (The last column of Table 2 states how many of the contigs were present in only one cDNA library, indicating potential tissue-specific expression.) The results are shown in Figure 4. The analysis shows that, in general, gene expression is most closely related in tissues that are developmentally related and connected. For example, globular (GE) and heart (HE) embryo stages are most closely related, followed closely by the torpedo stage (TE). The maturing embryos, viz., cotyledon (CE) and mature (ME) stages, clustered together but were distantly placed from the early stage embryos. The two seed coat stages (GC and TC) also shared a relatively high degree of similarity to each other. Gene expression in the pooled endosperm tissue (EN) from early developing seed stages shared some similarity with early embryonic stages but was more distant from the seed coats and maturing embryos. It is interesting to note that the CE and ME stages cluster away from the early seed tissues (GE, HE, TE, GC, TC and EN) and, to a lesser extent, from the other non-seed tissues (ES, LF, FL, ST), which is indicative of the distinct seed maturation program that is occurring in the later stages of embryo development. As the stem peel (PS) did not contain all of the tissues normally present in whole stems (ST), and was enriched for the phloem and phloem fiber cells [24], the PS gene expression profile did not cluster with ST and, as expected, was distantly placed from the rest of the vegetative tissues and seed tissues. Whole stems (ST) and etiolated seedlings (ES) showed a high degree of similarity, possibly due to their polysaccharide composition. Both whole stems and etiolated seedlings are likely to be particularly enriched in xylem tissues, the secondary walls of which produce polysaccharides different from those found in the pectin-enriched phloem fibers (PS), seed coats (GC, TC), or the primary walls of developing embryos [25]. Taken together, this analysis showed three distinct patterns of relatedness of gene expression among the 13 tissues: early seed stages, the maturing embryo stages and the juvenile vegetative tissues (ES, ST and LF). Nearly a fifth of the identified transcriptome is apparently unique to flax To identify the degree of potential homology of the flax unigenes shared with other plant species, we performed BLASTX analysis against the proteomes representing the six fully sequenced and annotated genomes of Arabidopsis, Oryza sativa (rice), Sorghum bicolor (sorghum), Vitis vinifera (grape), Populus trichocarpa (poplar) and Ricinus communis (castor bean) (see Methods). In general, the deduced flax polypeptides are more similar to those of poplar and castor bean than to grape, Arabidopsis, sorghum or rice (Table 4). This is consistent with the taxonomic grouping of flax, poplar and castor bean within the order Malpighiales [26]. The order Malpighiales, which is a large, diverse grouping of 42 families containing several economically important species, is hypothesized to have diverged within a relatively short time frame, and the taxonomic relationships of families within this order are poorly resolved.
However, genome sequencing of poplar [27], castor bean [28], cassava [29] and large EST libraries from other species within this order including flax (this study) will likely aid in molecular systematic studies to address broader phylogenetic relationships between these families. Whereas 66% of the unigenes (20,251) had hits in all six species, 16.8% (5,152) of the unigenes had no hits in any species, indicating that they may be flax specific genes. As one of the main objectives of this study was to gain a better understanding of what happens in the flax seed as it develops, we further analyzed the EST libraries for transcription factors with specific roles in embryo and seed development (Additional File 2). The establishment of the adaxial and abaxial polarity during cotyledon primordia differentiation at the heart stage of embryo development is specified by the HD-ZIPIII family, ASYMMETRIC LEAVES1 (AS1) (adaxial) and YABBY, KANADI families (abaxial) respectively [31]. ESTs corresponding to adaxial and abaxial polarity specifying TFs are expressed from globular stage onwards with maximum number of ESTs in the heart stage when the cotyledon primordia are specified ( Figure 6; Additional File 2). Key embryogenesis regulators are present in the EST collections LEAFY COTYLEDON (LEC) genes LEC1, LEC1-like (L1L), LEC2 and FUSCA3 (FUS3) are master regulators of embryogenesis that are primarily expressed throughout seed development, and ectopic expression of these TFs results in somatic embryogenesis or embryonic characteristics being overlaid on vegetative organs [32][33][34][35]. ABI3 is expressed only during seed maturation and is a key regulator of seed maturation processes such as seed dormancy and storage reserve accumulation [36]. AGAMOUS-LIKE15 (AGL15), a MADS domain containing TF is primarily expressed during Arabidopsis seed development and its ectopic expression increases the competency of cells to respond to somatic embryogenesis induction conditions [37,38]. In Arabidopsis, AGL15 is directly upregulated by LEC2 [39]. In addition, LEC2, FUS3 and ABI3 have all been demonstrated to be direct targets of AGL15 [40]. Examination of flax unigenes showed seed-specific enriched expression of L1L, LEC2, FUS3, ABI3 and AGL15 (Figure 7; Additional File 2). Only one EST with similarity to LEC2 was identified. The absence of LEC1 and the presence of the closely related L1L in seed tissues have also been observed for scarlett runner bean [33]. The identification of ESTs in seed-specific libraries that are pertinent to seed maturation program lends support to the quality of these libraries. Mining for biochemical pathway-specific ESTs that make flax seed nutritionally rich The flax seed contains many nutritionally important compounds such as proteins, fatty acids, lignans, flavonoids and mucilage. To determine the usefulness of the EST resources generated in this study, we queried for genes involved in the synthesis of the above noted seed components. In order to identify potential candidate enzymes amongst many flax unigenes, the Additional Files 3 and 4 provide the first step to narrow down putative flax candidates by examining the timing and distribution of ESTs across different tissues. Seed storage proteins Much of the proteins in flax seeds are storage proteins that exist within protein storage vacuoles and these proteins constitute 23% of the whole flax seed [41]. Storage proteins in flax seed are made up of~65% globulins and 35% albumins [11]. 
Conlinin is a 2S albumin and cupin and cruciferin are 11S and 12S globulins, respectively. Our EST data correlates the expression of the genes coding for the storage proteins with the reported levels of proteins in flax seeds ( Figure 8A; Additional File 3). Globulin encoding genes were expressed at much higher levels than those encoding the albumin and were observed in the later cotyledon (CE) and mature (ME) stages of embryo development. Interestingly, small numbers of ESTs for all the storage proteins were identified in young seed coats, primarily at the torpedo stage ( Figure 8A; Additional File 3). This is in agreement with the observation that a conlinin gene promoter is active in early stages of seed coat development [42]. Pooled endosperm from the corresponding seed coat stages did not identify any storage protein ESTs. These observations suggest that the seed coat does have a role in storage protein synthesis. Given that the seed coat is a major part of the overall mass in developing seeds, the seed coat might be a transient source of protein for developing embryos. Fatty acids and oil body formation Mature flax seeds consist of approximately 43% oil, mostly in the form of triacylglycerols (TAGs) within oil bodies located in the embryo [11]. In order to study the timing and source of lipid synthesis within the developing seeds, enzymes representing the four key steps of fatty acid synthesis were studied: acyl-chain elongation, termination, desaturation and TAG synthesis [43,44] ( Figure 8A, Figure 9; Additional File 3). Based on the preponderance of ESTs representing the 3-ketoacyl-acyl carrier protein synthases (KAS1, KAS2 and KAS3) in the various tissues, it appears that acyl chain elongation activity increases during the torpedo stage and that the embryo, endosperm and seed coat all contribute to this activity in the seed ( Figure 9A). Although the number of ESTs representing termination of elongation by fatty acyl-ACP thioesterases (FATA and FATB) was lower than KAS ESTs, this activity also appears to peak during the torpedo stage ( Figure 9B). Within the developing embryos, fatty acids are transferred onto a glycerol backbone to form triacylglycerols by the activity of diacylglycerol acyltransferase (DGAT). TAGs are stored in oil bodies, the outer membrane of which is a spherical phospholipid monolayer interspersed with the protein oleosin [44]. ESTs representing DGAT were found in quantities similar to the FATA and FATB ESTs, i.e. in very low quantities. The key difference is that this activity seems to peak later, during the cotyledon embryonic stage rather than the torpedo stage ( Figure 9D). Also, while termination of elongation and release of free FAs appears to occur in both seed tissues as well as in some of the vegetative tissues, DGAT expression in vegetative tissues is too low to detect with the EST counts. Desaturation is the key step that results in the desirable omega-3 and omega-6 fatty acids [44]. This seems to occur later during seed development as the spike in the number of ESTs representing the Fatty Acid Desaturases (FAD) 2, 3, 5 and 8 occurs within the mature embryo ( Figure 9C). One of the omega-3 fatty acids found in flax, alpha-linolenic acid (ALA, 18:3n-3), constitutes up to 55% of the total seed oil [41]. ALA is an essential fatty acid in human diet and it is converted to eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) which are then incorporated into membrane phospholipids. 
Some fatty acids are used in plant membrane synthesis, wax formation and pigmentation. The repertoire of lipid synthesis ESTs found in stem, stem peel and flowers provide a basis to probe these processes in these tissues (Figures 8A and 9). Oleosins, proteins associated with oilbodies, are known to stabilize them by preventing the coalescence of the lipid particles during seed germination [45]. In our datasets, the expression of putative homologs of Arabidopsis Oleosin 1, 2 and 3 genes was observed in the embryo beginning at the torpedo stage (TE), with greater levels in mature stage (ME) (Figure 8A; Additional File 3). This also coincides with the expression in the CE and ME stages of the FAD desaturases that are involved in the formation of the omega-3 and omega-6 fatty acids. Oleosin gene expression has been shown to be regulated in part by ABI3 in Arabidopsis [46]. There is also a correlation of ABI3 with oleosin ESTs at the torpedo and mature embryo stages (Figure 7 and 8A; Additional File 2 and 3), indicating that the EST data is reflective of the underlying genetic and biochemical programs. Lignans Flax is a rich source of secoisolariciresinol diglycoside (SDG). SDG is converted by intestinal bacteria to the so-called mammalian lignans enterodiol and enterolactone. SDG has phytoestrogen, antioxidant, and anticancer activities [8]. Lignans present in the seed coat of flax and are derived from coniferyl alcohol by the initial action of oxidases and dirigent proteins that yield pinoresinol [47]. Sequential reduction of pinoresinol by pinoresinollariciresinol reductase (PLR) results in the formation of SDG [48]. Analysis of our flax unigene collection identified several candidates corresponding to dirigent proteins and PLR that are predominantly expressed in the globular and torpedo stage seed coats ( Figure 8B; Additional File 4). Dirigent proteins had a higher number of EST hits in globular stage seed coat which corresponds with its early role in the lignan biosynthetic pathway, whereas pinoresinol-lariciresinol reductase, which acts later in the pathway, is expressed in the seed coat at the torpedo stage. Flavonoids Flavonoids constitute a major class of plant phenolics. Flax seeds are a rich source of flavonoids, which includes flavonols and anthocyanidins [49]. The flavonoid biosynthesis branch starts with the formation of chalcone, a reaction catalyzed by chalcone synthase (CHS), followed by the synthesis of flavanone by chalcone isomerase (CHI). Dihydroflavonol reductase (DFR) activity is the committing step for leucoanthocyanidin synthesis and proanthocyanidin, anthocyanidin and anthocyanin synthesis follows this step [50]. The key enzymes in the flavonoid synthesis pathway, viz., CHS, CHI and DFR are expressed during flax seed development especially in the seed coat tissues as shown by the number of ESTs ( Figure 8B; Additional File 4). BANYULS (BAN) gene of Arabidopsis encodes an anthocyanidin reductase in the anthocyanidin branch that produces cis-3-flavan-3-ol which has known health benefits in humans [51]. ESTs representing BAN are present in the embryonic and seed coat tissues of flax indicating that flax seeds could be a likely source of cis-3-flavan-3-ols ( Figure 8B; Additional File 4). Mucilage synthesis and secretion During flax seed development, the ovule integuments differentiate and form specialized cell types which include the seed coat epidermis that stores mucilaginous compounds. 
The chemical composition of flax seed mucilage has been investigated because of its benefits to human health. The pectin rhamnogalacturonan I (RG I) is the primary constituent of seed mucilage in Arabidopsis and several other species, whereas flax seed mucilage contains a mixture of neutral arabinoxylans (75%) and RG I (25%) [52][53][54]. In the mature seed, the cells of the outer epidermal layer of the seed coat are transformed into mucilage secretory cells (MSCs) that release mucilage upon seed hydration. In Arabidopsis, MUCILAGE-MODIFIED4 (MUM4) gene encodes Rhamnose Synthase2, an enzyme that catalyzes the synthesis of RG I [55], whereas MUM2 encodes a beta-galactosidase that enables the hydration properties of the mucilage by modifying the RG I side chains [56]. Furthermore, AtBXL1 gene, which encodes a beta-xylosidase/alpha-arabinofuranosidase, is essential for the release of mucilage by degradation of the arabinan side chains in the mucilage and/or cell wall of the mucilage secretory cells [57]. Genes encoding rhamnose synthase and beta-xylosidase are represented in the GC and TC tissue specific ESTs indicating that the mucilage synthesis and secretion pathway observed in Arabidopsis is represented in flax and the expression of corresponding genes are enriched specifically in seed coat tissues ( Figure 8B; Additional File 4). However, ESTs corresponding to the rhamnose synthase did not include the ortholog of Arabidopsis MUM4 gene, suggesting the possibility that there is some diversity of this mucilage synthesis pathway in flax. Galacturonosyltransferases that are involved in the polymerization of galacturonic acid [58] to form pectic RG I were also well represented in GC and TC tissue specific manner, indicative of their conserved roles in the synthesis of mucilage in the seed coat ( Figure 8B; Additional File 4). Interestingly, ESTs corresponding to the putative homologs of the AtBXL2 gene, a member of the small gene family that includes AtBXL1 [57], were expressed at very high levels in the seed coat tissues suggesting their role in the quick and uniform release of mucilage from the flax seed coat upon imbibition ( Figure 8B; Additional File 4). A putative flax ortholog of AtBXL1 is also one of the most abundant ESTs identified in a previous report of cDNAs from fiber-bearing flax tissues [59]. Conclusions We have developed a comprehensive EST resource for flax representing developmental stages of specific seed tissues, some vegetative and reproductive tissues. These resources include publicly available EST sequences at GenBank (Table 3), a queryable flax unigene database (http://bioinfo.pbi.nrc.ca/portal/flax/) and unigene distribution across libraries (Additional File 1). The datasets developed in this study enhance the genomic resource base for flax, an important crop. These resources can contribute to gene discovery and development of expanded molecular marker sets for breeding. Additionally, the unigene set developed in this study will contribute to the annotation and assembly of the whole flax genome sequence. The recently published flax-specific microarray based on EST sequences obtained from a fiber focused study while the present manuscript was under preparation provides a complimentary genomic tool for flax gene expression analysis [60]. 
However, having the EST resources of the developing seed partitioned into embryo, endosperm, and seed coat compartments relative to vegetative tissues in our study allows further refinement into determining the involvement of genes in temporally and spatially specific metabolic pathways. Analysis of our datasets indicates good representation of biological processes related to seed development. 7,222 flax unigenes did not have homologs to the genes of the model species Arabidopsis and there were 5,152 unigenes that do not show any homology to plant species in UniProt. These 5,152 unigenes therefore likely represent flax-specific genes. Many of these unidentified genes were broadly distributed whereas some were specific to a single tissue. Further studies of these will provide new insights into flax-specific programs. Plant growth conditions and tissue collection Breeder seed (F11) of Linum usitatissimum cv CDC Bethune was selfed for 7 generations (F18) as single plants in the Phytotron at the University of Saskatchewan. F19 seeds were germinated and grown in a growth chamber using a daily cycle consisting of 16 hours of light (23°C) and 8 hours of dark (16°C). Tissue samples were collected and frozen immediately in liquid nitrogen. The leaf, stem and flower samples were each collected from more than 10 individual plants. Dissection of 5,000 flax seeds was performed in order to isolate sufficient endosperm, embryonic, and seed coat tissues for creating the cDNA libraries. Five stages of embryos representing globular, heart, torpedo, cotyledon, and mature stages were isolated from developing seeds. Seed coat samples were collected from globular and torpedo embryo stages. Endosperm tissues were pooled from seeds containing globular to torpedo embryo stages. Etiolated seedlings were generated by incubating seeds on MS medium plates in the dark for four days and prior to harvesting, the seed coats were removed. The stem peel tissue consisting of epidermis, cortical tissues, phloem, developing fibers, and cambial tissue was prepared from stems of four week-old Linum usitatissimum L. cv Norlin germinated and grown as described previously [24]. RNA isolation and cDNA library construction The stem peel library (PS) was constructed using the Superscript Plasmid System with Gateway Technology for cDNA Synthesis and Cloning (Invitrogen, Carlsbad, CA) [24]. cDNAs were directionally cloned in pCMV-SPORT6 (Invitrogen) and transformed in chemically competent DH5α-FT E. coli. For the remaining 12 libraries, total RNA was isolated using the RNeasy Plant Mini Kit (Qiagen, Cat. No. 74904). On-column DNase digestion was performed using the RNase-free DNase set (Qiagen, Cat. No. 79254). Approximately 2 μg of total RNA from the tissues was used to construct each cDNA library. These 12 libraries were constructed using the Creator SMART cDNA library construction kit (Clontech, Cat. N. 634903). The 8 libraries derived from seed tissues (globular, heart, torpedo, cotyledon and mature embryos, as well as endosperm and globular and torpedo stage seed coats) were prepared as per the manual instructions and are in the pDNR-lib vector (Clontech). Two modifications to the manual were made during construction of the cDNA libraries for leaf, stem, flower and etiolated seedling. First, the cDNA size fractionation was performed on agarose gel instead of CHROMA SPIN-400 column supplied by the kit. The SfiI digested cDNAs were loaded into a 1% TAE agarose gel, and run for about 2 cm. 
The cDNA samples were excised from the agarose gel and purified using the QIAquick Gel Extraction kit (Qiagen, Cat. No. 28704). Second, a modified pBluescript II SK(+) vector was used. A ccdb gene, with SfiI sites at both ends, was inserted between the EcoRI and XhoI of pBluescript II SK(+). This modified vector was then digested with SfiI, agarose gel purified, and used for ligation with the SfiI digested cDNA samples. Ligations to construct the libraries were performed according to the Creator SMART manual. EST sequencing and analysis The libraries were spread onto the LB medium plates and cultured at 37°C overnight. Individual clones were picked into 96 or 384-plates manually or automatically by a Colony Picker (CP-7200, Norgren Systems). The ESTs were sequenced on the ABI 3730xl DNA Analyzer (Applied Biosystems) at the DNA sequencing facility of the National Research Council-Plant Biotechnology Institute (NRC-PBI, Saskatoon, SK, Canada). The HE, TE and ME libraries were sequenced in two batches (Table 3). A total of 274,278 sequences were obtained. The reader can refer to Table 1 for the tissue distribution. The assembling process of EGassembler was used. Details are given in the EGassembler tutorial [18] (http://egassembler.hgc.jp/cgi-bin/eassembler4.cgi?pmo-de=help&i_param=tutorial). In the first step, the sequences were cleaned and ones with length of less than 100 bases were removed. The following steps consisted of masking the repeats, vector and organelle sequences. Masked nucleotides were removed and any resulting sequences less than 80 bases in length were also removed. The first clustering process was performed for each separate library. The resulting 78,209 sequences (27,168 contigs and 51,041 singletons) were then merged, and reassembled, resulting in 30,640 unigenes (15,784 contigs and 14,856 singletons). These unigenes were reallocated back into their respective individual libraries. All EST sequences and unigenes have been deposited at http://bioinfo.pbi.nrc.ca/portal/ flax/. The clustering of the ESTs were performed using Hierarchical Clustering Explorer 3.5 sofware (http:// www.cs.umd.edu/hcil/hce/power/power.html) [22]. The number of EST reads for each unigene in each of the 13 different tissues was used as the input data for HCE3.5 software with parameters set for Pearson correlation coefficient for similarity/distance measure and average linkage method for hierarchical clustering. Microscopy Clearing Fertilized ovules were cleared for 2 days in chloral hydrate solution (8:1:2 chloral hydrate -glycerol -water w/v/v) and viewed with a compound microscope (Leica DMR) using Nomarski optics. Scanning Electron Microscopy Samples were fixed in 3% glutaraldehyde, post-fixed in 1% osmium tetroxide and dehydrated in a graded acetone series as described [61]. Samples were mounted on aluminum stubs and coated with gold in an Edwards S150B sputter coater. Observations were made with a Phillips 505 scanning electron microscope at 30 kV and recorded on Fujifilm FP-100b professional film. Images were scanned and treated in Adobe Photoshop CS (Adobe Systems, San Jose, California) to improve the contrast and place scale bars.
8,094.8
2011-04-29T00:00:00.000
[ "Agricultural and Food Sciences", "Biology", "Environmental Science" ]
Coaxially printed magnetic mechanical electrical hybrid structures with actuation and sensing functionalities Soft electromagnetic devices have great potential in soft robotics and biomedical applications. However, existing soft-magneto-electrical devices would have limited hybrid functions and suffer from damaging stress concentrations, delamination or material leakage. Here, we report a hybrid magnetic-mechanical-electrical (MME) core-sheath fiber to overcome these challenges. Assisted by the coaxial printing method, the MME fiber can be printed into complex 2D/3D MME structures with integrated magnetoactive and conductive properties, further enabling hybrid functions including programmable magnetization, somatosensory, and magnetic actuation along with simultaneous wireless energy transfer. To demonstrate the great potential of MME devices, precise and minimally invasive electro-ablation was performed with a flexible MME catheter with magnetic control, hybrid actuation-sensing was performed by a durable somatosensory MME gripper, and hybrid wireless energy transmission and magnetic actuation were demonstrated by an untethered soft MME robot. Our work thus provides a material design strategy for soft electromagnetic devices with unexplored hybrid functions. angles for different core-sheath fibers. Unlike the high-modulus copper wire that constrains the deformation, the liquid metal core with high fluidity does not evidently impair the magnetically induced deformation. The MME fiber thus has similar flexibility as its hollow counterpart (filled with air). The remanence increases with the increase of NdFeB content; a high content of NdFeB particles can help enhance the magnetically actuated deformability of the MME fiber/structure. However, with the increase of NdFeB contents, yield stress of the composite ink also increases, making it hard to print the composite ink; also a high NdFeB content also reduces the flexibility of the MME fiber/structure. As shown in Supplementary Fig. 4c and d, as the NdFeB content increases, elastic modulus of printed MME fibers increases and the tensile strength decreases. After systematic investigation, the composite ink was optimized to have a NdFeB particle content of 50 wt% (the mass ratio of PDMS to NdFeB particles is 1:1). Supplementary Figure 5. Schematic diagram of the magnetization process. In step Ⅰ, the cured unmagnetized MME structure was placed in a mold with a predesigned geometry. In step Ⅱ, the unmagnetized MME structure was elastically deformed by the mold and placed in a pulsed magnetic field Bmag (about 3 T). Under the action of a pulsed magnetic field Bmag, the MME structure would reache its saturation magnetization, with the magnetization direction same as the pulsed magnetic field Bmag. In step Ⅲ Supplementary , after the MME structure is released from the mold and re-coiled to its initial deformation free geometry, a magnetization profile m would be imparted onto the MME structure. In this process, magnetization profile m of the MME structure can be controlled by designing the geometry of the mold and adjusting the pulsed magnetic field Bmag. Supplementary Figure 6. Schematic diagram of the magnetically actuated deformation of a magnetized MME structure. Interaction between the external actuation magnetic field Bact and the magnetization profile m will generate a spatially varying magnetic torque Tm, across the MME structure. Tm would deform the MME structure till an equilibrium state is achieved. 
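As a rough numerical illustration of the programming step in Supplementary Fig. 5, the sketch below computes the magnetization profile m(s) imparted onto a fiber that the mold holds in a shape θ_mold(s) while the pulse field Bmag saturates it; after release, the reference-frame profile is simply the field direction rotated back by −θ_mold(s). The mold shape, the saturation value M0, and the field direction are illustrative assumptions, not the values used in the paper.

```python
# Minimal sketch of the magnetization-programming step (Supplementary Fig. 5):
# the fiber is held by the mold in a shape theta_mold(s), saturated along the
# pulse-field direction b_hat, and after release carries the reference-frame
# profile m(s) = M0 * R(-theta_mold(s)) @ b_hat.
# Mold shape, M0 and field direction are illustrative assumptions.
import numpy as np

M0 = 8.0e4                                   # saturation remanence [A/m], illustrative
b_hat = np.array([0.0, 1.0])                 # unit vector of the ~3 T pulse field Bmag
s = np.linspace(0.0, 1.0, 11)                # normalized arc length along the fiber
theta_mold = np.pi * s                       # mold bends the fiber into a semicircle

def programmed_profile(theta_mold, b_hat, M0):
    """Magnetization in the undeformed (reference) frame after release."""
    m = np.empty((len(theta_mold), 2))
    for i, th in enumerate(theta_mold):
        c, sn = np.cos(-th), np.sin(-th)     # rotate the field direction back by -theta
        R = np.array([[c, -sn], [sn, c]])
        m[i] = M0 * R @ b_hat
    return m

m_profile = programmed_profile(theta_mold, b_hat, M0)
print(m_profile[[0, 5, 10]])                 # profile at the clamped end, middle and tip
```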
The direction of the magnetization profile m tends to coincide with the direction of the actuation magnetic field Bact. As h increases, overlapping between adjacent layers decreases, affecting the structure fidelity; meanwhile, a smaller h would affect MME structures' conductivity. At h = 800 μm, the 3D MME structure with high structure fidelity and electrical conductivity can be fabricated. and the magnetic field module for the finite element models of the MME fiber and MME coil structure. Supplementary (f-g) Boundary condition settings for the solid mechanics module and the magnetic field module in the finite element model of the magnetically actuated deformation process of the MME fiber and MME coil structure. The magnetization and magnetic actuation deformation of MME structures were analyzed by COMSOL Multiphysics 6.0 (COMSOL Inc., Sweden). The multi-physics models of FEA were built in a 500×500×500 mm 3 cube space, which mainly consisted of air, coil (180 mm in diameter), and MME structure. The tetrahedral grid was used to divide the model, and the grid size ranged from 0.16 mm to 2.2 mm, as shown in Supplementary Fig. 25a-b. The FEA of magnetization includes two analysis steps: the calculation of the structure bending process under external volume force using the solid mechanics module in stationary and the analysis of the MME structure magnetization profile under 3 T pulsed magnetic field using the magnetic field module in time dependent. The boundary condition settings for the solid mechanics module and the magnetic field module for the finite element models of the MME fiber and MME coil structure as shown in Supplementary Fig. 25d. The length of the fixed surface is 9 mm, and the volume force is applied to deform the MME fiber and MME coil structure. The folded MME structure was magnetized using a pulse current to the coil, which could generate a 3 T pulse magnetic field in the magnetism module, as shown in Supplementary Fig. 25e. The pulsed magnetic field is shown in Supplementary Fig. 25c. After calculating the magnetization profile m in the MME structure, the magnetic actuation deformation was analyzed by coupling the magnetic field-no current module with the solid mechanics module ( Supplementary Fig. 25f). The magnetic actuation deformation of each step was analyzed by interactive calculation of magnetic force using the magnetic field-no current module and the deformation using the solid mechanics module. The magnetic force on the MME structure was calculated by integrating the magnetic stress tensor induced by the interaction between actuation magnetic field Bact and the calculated magnetization profile m. The boundary condition settings of the MME fiber and MME coil structure as shown in Supplementary Fig. 25g. Finally, the final magnetic actuation deformation of the MME structure could be obtained by iterative calculation in multi-steps. Supplementary Figure 26. Measurement of the magnetic flux density B at 1 mm from the MME fiber with a gauss meter. Supplementary Figure 27. The L-shaped mold for magnetizing a MME fiber. Supplementary Figure 28. Quasi-static analysis of the MME fiber. The bending moment acting on an infinitesimal section of a MME fiber under equilibrium-state deformation is shown in the inset. Therefore, On the basis of the theoretical framework developed for ferromagnetic soft materials 30 , we provide the fundamental equations for quantitative description of the deformation of MME structure upon magnetic actuation. 
The vector m represents the magnetization of an infinitesimal section of the MME fiber in its initial, undeformed state. The net magnetic moment Mnet of such a section is obtained by integrating m over its volume, where A is the cross-sectional area. Along its neutral axis, the MME structure can be segmented into infinitesimal coaxial elements whose effective bending stiffness is E1 I1 + E2 I2, where E1 and E2 are the Young's moduli of the sheath and the liquid metal core, respectively, and I1 and I2 are the corresponding moments of inertia. The curvature at position s is k(s) = dθ(s)/ds, where θ(s) is the rotation angle at position s; this angle describes the target shape of the MME structure programmed by the continuous magnetization profile m, and the total rotation angle at position x is θ(x) = ∫_0^x k(s) ds.
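The following Python sketch illustrates this quasi-static picture on a planar cantilevered fiber: each segment carries a magnetic torque m × Bact per unit volume, the bending moment at s is that torque integrated over the part of the fiber beyond s, and the curvature follows from the composite stiffness E1 I1 + E2 I2. This is a minimal fixed-point iteration under those assumptions, not the paper's finite-element model, and all material and field values are illustrative.

```python
# Minimal planar sketch of the quasi-static beam model above: a fiber with
# bending stiffness EI = E1*I1 + E2*I2 and axial magnetization m carries a
# distributed magnetic torque m x Bact; the bending moment at s is that
# torque integrated over the part of the fiber beyond s, and the curvature
# follows as dtheta/ds = M(s)/EI.  All numbers are illustrative.
import numpy as np

L, n = 20e-3, 200                 # fiber length [m], number of segments
ds = L / n
E1, I1 = 1.2e6, 5.0e-14           # sheath modulus [Pa] and moment of inertia [m^4]
E2, I2 = 0.0, 0.0                 # liquid-metal core: bending stiffness taken as negligible
EI = E1 * I1 + E2 * I2
A = 7.9e-7                        # cross-sectional area [m^2]
M0 = 8.0e4                        # remanent magnetization magnitude [A/m]
B = np.array([0.0, 50e-3])        # actuation field Bact [T], perpendicular to the fiber

theta = np.zeros(n)               # rotation angle theta(s), clamped at s = 0
for _ in range(500):              # fixed-point iteration to the equilibrium shape
    # current magnetization direction of each segment (initially along +x)
    m = M0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    # torque per unit length: out-of-plane component of m x B, times the area A
    tau = A * (m[:, 0] * B[1] - m[:, 1] * B[0])
    # bending moment M(s): torque integrated from s to the free end
    Mbend = np.cumsum(tau[::-1])[::-1] * ds
    theta_new = np.cumsum(Mbend / EI) * ds    # theta(s) = integral of the curvature
    if np.max(np.abs(theta_new - theta)) < 1e-8:
        theta = theta_new
        break
    theta = 0.5 * theta + 0.5 * theta_new     # damped update for stability
print("tip rotation [deg]:", np.degrees(theta[-1]))
```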
1,659
2023-07-22T00:00:00.000
[ "Materials Science" ]
Geometry of Periodic Monopoles BPS monopoles on $\mathbb{R}^2\times S^1$ correspond, via the generalized Nahm transform, to certain solutions of the Hitchin equations on the cylinder $\mathbb{R}\times S^1$. The moduli space M of two monopoles with their centre-of-mass fixed is a 4-dimensional manifold with a natural hyperk\"ahler metric, and its geodesics correspond to slow-motion monopole scattering. The purpose of this paper is to study the geometry of M in terms of the Nahm/Hitchin data, i.e. in terms of structures on $\mathbb{R}\times S^1$. In particular, we identify the moduli, derive the asymptotic metric on M, and discuss several geodesic surfaces and geodesics on M. The latter include novel examples of monopole dynamics. Introduction This paper deals with periodic BPS monopoles, namely Yang-Mills-Higgs fields (Φ, A j ) on R 2 ×S 1 satisfying the Bogomolny equations. A useful tool for understanding systems of this type is the generalized Nahm transform. The best-known Nahm transform describes BPS monopoles on R 3 in terms of solutions of a set of ordinary differential equations, namely the Nahm equations. But this is part of a more general picture (which also includes the ADHM transform for self-dual Yang-Mills instantons): a generalized Nahm transform, which may be understood in terms of the reciprocity between self-dual Yang-Mills equations on dual 4-tori [1,2]. Suitable rescalings of the tori (and corresponding dimensional reductions of the self-duality equations) then give, as special cases, the ADHM transform and the Nahm transform for monopoles on R 3 , as well as several other systems including the present one. The general scheme suggests that the Nahm transforms of BPS monopoles on R 2 × S 1 are solutions of the Hitchin equations on the cylinder R × S 1 (satisfying appropriate boundary conditions), where the two circles are dual to each other. This was confirmed in [3], where many of the details were worked out. In this paper, we focus on the case of periodic 2-monopole fields with gauge group SU (2). Such fields may also be visualized as a pair of infinite monopole chains. By contrast with the case of monopoles in R 3 , the system as a whole is infinitely massive, so its centreof-mass and overall phase are parameters which must be kept fixed [4]. However, the relative separation and phase of the monopoles are free to vary. So we focus on the relative moduli space M of solutions: this is a 4-dimensional manifold equipped with a natural hyperkähler metric. The geodesics in M are of particular interest, as they correspond to slow-motion dynamics of the system. Our purpose is to study the geometry of M in the Nahm-transformed picture, ie. in terms of structures on the cylinder R×S 1 . The dynamics of monopoles in this system is different from that of the well-known case of monopoles on R 3 , owing to the periodicity. Although the metric on M is not known explicitly, one can identify some geodesics as fixed-point sets of discrete symmetries, and this provides examples of such novel dynamics. For the space R 2 × S 1 on which the monopole fields (Φ, A j ) live, we use coordinates (x, y, z), with z having period 2π. So the gauge potential A j and the Higgs field Φ are smooth functions of (x, y, z), periodic in z, and they satisfy the Bogomolny equations 2D j Φ = −ε jkl F kl , where F kl denotes the gauge field. The monopoles are located, roughly speaking, at the zeros of Φ. For monopole fields of charge 2, the boundary behaviour in the non-periodic directions, ie. 
as ρ → ∞ where x + iy = ρ e iχ , is locally in some gauge. Here C is a positive constant which determines the monopole size, or more accurately the ratio between the monopole size and the z-period. The C → 0 limit corresponds to monopoles on R 3 ; the opposite extreme C 1, where the monopoles spread out, is discussed in more detail in [5]. Note that the system is not rotationallysymmetric about the z-axis: this is reflected in the boundary condition for A z . It does, however, admit the discrete symmetry of rotations by π about the z-axis, which can be compensated by a periodic gauge transformation. Such monopole fields (or rather, the subset of centred monopole fields) correspond, via the Nahm transform [3], to the following Nahm/Hitchin data on R × S 1 . As coordinates on R × S 1 , take r ∈ R and t with period 1. Let s denote the complex coordinate s = r + it. The fields on R × S 1 consist of a gauge potential (a r , a t ) with the gauge group in this case being SU (2), and a complex Higgs field φ in the adjoint representation. The gauge field f rt is simply written as f . These fields satisfy the Hitchin equations Here Dsφ := ∂sφ + [as, φ], and φ * denotes the complex conjugate transpose of φ. The field φ is constrained by where K is some complex constant, and the boundary condition is f → 0 as r → ±∞. In general, for a periodic n-monopole system, the Nahm/Hitchin data would be u(n)valued. In our case, n = 2. The significance of the fields (φ, as) being su(2)-valued, rather than u(2)-valued, is that the corresponding two-monopole system is centred, with its centre-of-mass fixed at the point (0, 0, 0). As noted above, we are only interested in the relative separation of the monopoles, and so we restrict to su (2). Actually, one may equally regard the centre-of-mass as being located at the point (0, 0, π), since the distinction between these two possibilities is ambiguous in view of the z-periodicity. There is a natural map τ which translates a monopole solution by π in the z-direction, and which therefore interchages these two centring points. The corresponding map on (φ, a r , a t ) consists of gauging by an antiperiodic gauge transformation, ie. by Λ(r, t) ∈ SU(2) with Λ(r, 1) = −Λ(r, 0) and Λ t (r, 1) = −Λ t (r, 0). This preserves the periodicity of (φ, a r , a t ) as well as the equations (1) and the boundary conditions. Clearly the fixed points of τ consist precisely of the fields which are 1-monopole chains in disguise, ie. periodic 1-monopole solutions taken over two periods. There are exactly two such solutions, up to gauge-equivalence: their Nahm/Hitchin data can be written as and they have K = ±2 respectively. No other solutions of (1), with these boundary conditions, are known explicitly. One way of solving the equations numerically is by minimizing an "energy" functional, and we implemented this in order to get an idea of what the fields look like. Briefly, the details are as follows. Define The r-cutoff L has to be finite for the integral to converge, but in practice it does not have to be large, since the solutions are well localized. Then there is a Bogomolny-type bound on E L , and this bound is saturated if and only if the Hitchin equations (1) are satisfied. So minimizing E L numerically gives a solution. In the next section, we identify the four moduli in terms of the fields (φ, as), at least in the asymptotic region of the moduli space. 
Then in Section 3, we derive the asymptotic metric on M by direct calculation, and see that it agrees with the metric previously derived by considering the forces between monopoles [4]. Section 4 describes various geodesic surfaces in M, and this is followed by a discussion of geodesics and the associated monopole dynamics in Section 5. The moduli The moduli space M is 4-dimensional, and the aim here is to describe the four moduli in terms of the Nahm/Hitchin data. Two of the moduli are the real and imaginary parts of the complex number K appearing in the constraint (2). We shall describe the remaining two moduli in the asymptotic region of M, which is where |K| 1; this corresponds to the monopoles being widely-separated. Numerical solutions, obtained as outlined above, indicate that the data (φ, as) then resemble two well separated lumps on the cylinder, located at the zeros ±s 0 of det φ. By this we mean that the gauge field f is close to zero except at these two points, and the peaks at s = ±s 0 become more concentrated as |K| increases. In particular, since f ≈ 0 in the central region r ≈ 0, it makes sense to consider the t-holonomy there, and we define an angle θ by The two monopoles are located [3,5,6] on x + iy = ±C √ −K, and θ determines their z-offset: the z-coordinates of the monopoles are z = ±θ respectively, as we shall see in the next section. The sign ambiguity in the square root reflects the indistinguishability of the monopoles. The equation (4) only determines θ up to sign, but we may remove this ambiguity by using the Higgs field φ, and regard θ ∈ (−π, π] as periodic with period 2π. In fact, θ is really a local coordinate on a Z 2 -twisted circle bundle over the asymptotic K-space, and the definition of its sign is a matter of choice. Here is one particular scheme, using the value φ 0 of φ at r = t = 0. The quantity ξ = −i tr(U 0 φ 0 ) is a gauge-invariant complex number. If θ = 0, π, then ξ is non-zero, and it follows from • if Re(K) ≥ 0, then Im(ξ) = 0, and we can define sgn(θ) = sgn(Im(ξ)); • if Re(K) < 0, then Re(ξ) = 0, and we can define sgn(θ) = sgn(Re(ξ)). If we write K = |K|e 2πiη , then θ changes sign as η goes from 0 to 1, ie as one goes around a loop in the asymptotic K-space. In fact, with this particular scheme, the jump occurs as η crosses 1/4. Finally, let us turn to the fourth modulus, ω, which corresponds to a relative phase between the two monopoles. It also correponds to a relative phase between the two lumps on the cylinder. Define f ± = f (±s 0 ) ∈ su (2): in other words, the f ± are the directions in the Lie algebra of the gauge field at the two peaks. Then we define ω to be the angle between where γ is a path between −s 0 and s 0 . For large |K| this formulation is path independent (up to winding round the cylinder) as the field strength vanishes between the peaks. This only defines ω up to sign, but the ambiguity may be resolved as before,this time using the quantity ξ = i tr(f + f − φ 0 ). So ω ∈ (−π, π] has period 2π. Since f → 0 as r → ±∞, it is also natural to consider the holonomies at infinity, namely It follows from the equations and boundary conditions that tr U ± = 0, so the U ± individually contain no gauge-invariant information. But the angle between them does: for example, letÛ − be the element of SU(2) obtained by parallel-propagating U − along t = 0 from r = −∞ to r = ∞, and defineω by 2 cosω = tr U +Û * − . This quantityω is related to ω byω − ω = π (modulo integer multiples of 2π). 
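As a small numerical aside, the sketch below turns the moduli just identified into the monopole picture: the lumps on the cylinder are taken at the zeros ±s0 of det φ, i.e. of H(s) = 2 cosh(2πs) − K (as in the approximate solution used later), while the monopoles themselves sit at x + iy = ±C√(−K) with z-coordinates ±θ. The values of C, K and θ are illustrative.

```python
# Minimal sketch relating the Nahm/Hitchin moduli to the monopole picture:
# lumps on the cylinder at the zeros +/- s0 of H(s) = 2*cosh(2*pi*s) - K,
# monopoles at x + i*y = +/- C*sqrt(-K) with z-offsets +/- theta.
# C, K and theta below are illustrative values, not data from the paper.
import cmath

def lump_position(K):
    """Zero s0 of H(s) = 2*cosh(2*pi*s) - K on the cylinder R x S^1."""
    return cmath.acosh(K / 2) / (2 * cmath.pi)

def monopole_positions(K, theta, C=1.0):
    """Positions of the two monopoles in R^2 x S^1 for given moduli."""
    w = C * cmath.sqrt(-K)                 # x + i*y of one monopole
    return [(w.real, w.imag, theta), (-w.real, -w.imag, -theta)]

K, theta = 6.0 + 2.0j, 0.7                 # asymptotic region: |K| >> 1
print("s0  =", lump_position(K))
print("monopoles:", monopole_positions(K, theta))
```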
By contrast with ω, the definition ofω is valid throughout M, and not just in the asymptotic region. In particular, for the special solutions (3), which are not in the asymptotic region, we can compute U ± exactly, and this shows that the fields (3) haveω = 0. The asymptotic metric The natural hyperkähler metric on M is believed to have no continuous symmetries, and it is not known explicitly. The asymptotic metric, however, has a fairly simple form. It was derived in [4] by studying the effective Lagrangian of the 2-monopole system, in other words the forces between well separated monopoles. In this section, we see that this asymptotic metric can be calculated directly in terms of the Nahm/Hitchin data. In particular, this shows us how to identify the moduli of the previous section with the "monopole-based" moduli used in [4]. Let us think of a tangent vector in M, at the point corresponding to the solution (φ, as), as a perturbation (δφ, δas) which preserves the equations (1), and also satisfies the for the perturbation to be orthogonal to the gauge orbits at (φ, as). Here δas = 1 2 (δa r +iδa t ) and δa s = 1 2 (δa r − iδa t ) = −(δas) * . The combined equations on (δφ, δas) are equivalent to the pair In addition to these differential equations (6), one also needs boundary conditions δφ → 0, δas → 0 as r → ∞, and the constraint tr(φ δφ) = constant. The norm-squared of the vector V = (δφ, δas) is then defined to be and this gives the metric on M. Note that if V 1 = (δ 1 φ, δ 1 as) is one solution of (6), then so are each of Furthermore, these four vectors are orthogonal with the same norm; in other words, V a , V b = p 2 δ ab for some real constant p. Suppose that V 1 corresponds to the increments (δ 1 K r , δ 1 K i , δ 1 θ, δ 1 ω) in the moduli, where K = K r + iK i are the real and imaginary parts of K; and similarly for the other V a . Define the 4 × 4 matrix Q by Then the coefficients of the metric on M, with respect to the local coordinates (K r , K i , θ, ω), are the entries in the matrix g = (QQ t ) −1 . In the asymptotic region |K| 1, there is a crude but effective approximate solution (φ, as), namely where H = 2 cosh(2πs)−K. There are branch cuts along (r ≥ r 0 , t = t 0 ) and (r ≤ −r 0 , t = −t 0 ), on which a t = 0, and (8) is a solution everywhere except at the two singular points s = ±s 0 . Numerics indicate that it is a good approximation, for large |K|, to the actual smooth solutions, except very close to the singular points. The gauge field consists, in effect, of delta-functions at the singular points, and we complete the approximate description of the field by simply assigning elements f ± of su (2), orthogonal to σ 3 , to these two points. The moduli K r , K i and θ appear explicitly in (8), and ω is the angle between f + and f − . To keep things simple in what follows, let us restrict to the case Re(K) > 0. Since θ and ω are twisted rather than global coordinates, one obtains the complete picture by doing the case Re(K) < 0 as well, and then patching things together in an appropriate way. For V 3 we get δ 3 φ = 0, whence δ 3 K r = δ 3 K i = 0; and δ 3 as = 1 4 εhσ 3 . Thus δ 3 a r = − 1 2 iε Im(h)σ 3 and δ 3 a t = − 1 2 iε Re(h)σ 3 , from which one directly computes . Note that the 'knock-on' effect of one perturbation on another has been ignored, and indeed for large |K| such terms only make relatively small contributions to the metric. The next step is to compute the leading terms in the integrals I and J for |K| 1. 
These are obtained from the approximation 1/H(r, t) ≈ −1/K for 0 ≤ r < r 0 , 1/H(r, t) ≈ 0 for r > r 0 , which gives Then it is straightforward to calculate the asymptotic metric as described above, via the matrix Q, and we get Now the asymptotic metric ds 2 CK of [4], which was computed by considering the forces between monopoles, is given by where (x, y, z) is the location of one of the monopoles relative to the centre of mass, and ν is a relative phase with period π. The functions U and g are defined by where x + iy = ρ e iπ(χ−χ 0 ) , with χ 0 being some constant. We already know [3,5] that x + iy = C √ −K; and it is straightforward to check that the metrics (10) and (11) agree, up to an overall factor of 8π, if we make the identification θ = z and ω = −2ν. In particular, therefore, 2θ is the z-offset of the monopoles. Geodesic surfaces The geometry of the moduli space M, and in particular its geodesics, correspond to the dynamics of monopole systems in situations where radiative losses are small, and in particular when speeds are small [7,8]. Not knowing the metric explicitly means that we cannot find many geodesics exactly; but some can be obtained as fixed-point sets of discrete symmetries of M. The first step, as in the familiar R 3 case [9], is to identify geodesic surfaces in M; we do this by looking for discrete symmetries of the system (1, 2). The most obvious symmetry is φ → −φ, as → as. In the monopole picture, (12) corresponds to rotation by π about the z-axis. Let S denote the fixed-point set of (12). Since K is preserved, (12) acts only on the other two moduli θ andω (for this section, it is more convenient to useω than ω). From the discussion of signs in Section 2, it is clear that the effect of (12) is θ → −θ andω → −ω. Thus asymptotically, S has four disconnected components, corresponding to θ,ω ∈ {0, π}. In this asymptotic regime, we see a pair of monopoles, located at the points x + iy = ±C √ −K, z = ±θ, and with their phases either aligned or anti-aligned depending onω. The question now is what S looks like globally, not just asymptotically. As described in [6], the solutions belonging to S take a simplified form: there exists a gauge in which where f , g and h are complex-valued functions. The constraint on det φ is f g = C 2 H = C 2 [2 cosh(2πs) − K], and the Hitchin equations become together with 2h = −∂s log f , where ∆ = 4∂ s ∂s is the Laplacian. The boundary condition is |f 2 /H| → C 2 as r → ±∞, and the solutions have the symmetry |f (−r, 1 − t)| = |f (r, t)| for all r, t. The remaining gauge freedom consists of where λ(r, t) is periodic and of unit modulus. Taking account of this gauge freedom, it is easy to see that there are four classes of solution of (13): f could have one zero or none; and lim r→∞ Im log f (r, t) = 2πnt, where n is either 0 or 1. As we shall see below, these four possibilities correspond to the four asymptotic components of S. Let us begin by studying one of these cases in detail, namely where f has no zeros and n = 0. In particular, this means that f has the form where ψ is a complex-valued periodic function. If we write ψ = α + iβ for the real and imaginary parts of ψ, then a t = 1 4 i(α r − β t )σ 1 , and therefore the holonomy at r is The boundary condition says that α(r, t) ∼ ±2πr as r → ±∞, so the holonomies at the two ends are U ± = ±iσ 1 , and henceω = π. To compute θ, note that the symmetry (14) implies α r (0, 1 − t) = −α r (0, t); it follows that U 0 = 1 and so θ = 0. 
For f of the form (15), the equation (13) becomes which has a unique solution for each value of K ∈ C. So this case gives one of the components S − of S, namely the one corresponding to θ = 0,ω = π; and S − is diffeomorphic to C. With its natural metric (the restriction of the metric on M), the surface S − is a deformed version of the Atiyah-Hitchin cone [9,8], having no continuous symmetries (unlike the Atiyah-Hitchin cone itself, which is rotationally-symmetric). The metric on S − may be calculated numerically, and this is instructive in that it shows the effect of varying the parameter C. The procedure is as follows. Given complex numbers K and δK, with δK small, we solve (16) numerically for K and for K = K + δK, giving real functions α and α respectively. This may be done by minimizing an appropriate functional of α, as described in [6]. We take β = 0, so ψ = α is real-valued: this is just a gauge choice. However, setting β = 0 as well leads to a perturbation (δφ, δas) which does not satisfy the gauge-orthogonality condition (5). So we need to use δψ = α − α + i δβ, where δβ is determined by the requirement that δψ be orthogonal to the gauge orbits at ψ = α. This is just a linear equation for δβ, having a unique solution, and is straightforward to solve numerically. The final step then uses (7) to evaluate δK 2 , and hence gives the metric on S − . This metric has the form ds 2 = Ω(K) |dK| 2 , and we know from (10) that log |K| as |K| → ∞. The upper plots in Figure 1 show Ω(K)/C 2 on |K| < 6, for C = 1 and C = 5 respectively. The corresponding lower plots are rough sketches of the surface, obtained by computing the Gaussian curvature numerically from Ω and then finding an embedded surface in R 3 with that curvature. We see that for C = 1, the surface has approximate rotational symmetry, as one would expect since it should approach the Atiyah-Hitchin cone as C → 0. For larger C, however, the lack of symmetry becomes apparent, with the cone becoming stretched in the K r -direction. We next consider the component of S corresponding to the case where f has no zeros and n = 1. The corresponding geodesic surface S + is isometric to S − : in fact, the isometry is the map τ . The action of τ amounts to gauging by the antiperiodic transformation , and the effect of this on the moduli is K → K, θ → θ + π,ω →ω. So in particular, S + has θ =ω = π. For the two remaining cases, f has a zero, which has to be one of the zeros of H. Then f has either of the two forms where µ = e πs − W e −πs , W = (K + √ and we take the branch of the square root such that |W | > 1. The boundary condition is Re(ψ) → 0 as r → ∞, Re(ψ) → −2 log |W | as r → −∞. These classes (17, 18) both havẽ ω = 0, and they have θ = 0, π respectively. They are interchanged by the map τ . However, each class contains the two special solutions (3), which are the fixed points of τ . So in fact we get a single component S 0 of S, consisting of two copies of the K-plane branched over the points K = ±2. In effect, interchanging the two branches of the square root in (19) interchanges the forms (17) and (18). The single surface S 0 has two asymptotic regions, each of which is cone-like: so the picture may be described as a double-trumpet, by contrast with the Atiyah-Hitchin trumpet of the R 3 case [9]. The metric on S 0 has no continuous symmetries, but has an approximate rotational symmetry (about the axis of the trumpet) for small C. 
The second expression in (19) is just the usual conformal mapping K = W + W −1 , and this gives us a global coordinate W ∈ C * on S 0 . Given W , we take the field to be determined by (18). If |W | 1 it lies on the sheet θ = π, while if |W | 1 it lies on the sheet θ = 0. The K-plane is cut on the line segment −2 ≤ K ≤ 2, which corresponds to |W | = 1, and crossing this curve goes from one sheet to the other. In the next section, we shall describe geodesics which cross sheets in this way. Finally, we remark on the large-C behaviour of the metric. In [5] it was suggested that when C 1, ie. when the monopoles are large compared to the z-period, the only relevant modulus is K; and the metric on the K-plane was computed by using an approximation to the monopole fields which is valid in this limit. In fact, we can also compute this limiting metric in the Nahm-transformed picture. The crucial observation is that in the large-C regime, the fields are well approximated by the singular solution (8), except at its singularities. In other words, the large-|K| and large-C approximations are the same. It follows that the large-C metric on each component of S is ds 2 = C 2 I|dK| 2 , where I(K) is defined in (9). In Section 3 we only used I(K) for |K| large, but in this context we need it for all K ∈ C. The integral (9) converges for all K except the two special values K = ±2, and its value is plotted in Figure 2, as a function of K ∈ C. This picture should be compared with the plots of Ω/C 2 in Figure 1 for C = 1 and C = 5: the function I(K) appears to be the C → ∞ limit of Ω/C 2 . A more extensive numerical investigation of Ω, for a wider range of C, bears this out. Note also that in this limit, the surface S 0 resembles two copies of Figure 2, branched between the singularities. Geodesics and monopole scattering Our aim in this section is to identify geodesics on S ± and S 0 , and to interpret these in terms of 2-monopole trajectories. One could construct such geodesics numerically, for example by using the numerically-derived metrics on these surfaces; such a construction was implemented in [5] in the large-C limit. But here we will do something more analytic, namely identifying geodesics on S ± and S 0 as fixed-point sets of additional symmetries of the system. An example of this type was presented in [6]; we will here give a fuller discussion, revealing rather more interesting behaviour than was seen before. The first step is to describe the relevant symmetries of the Hitchin equations (1). There are two of them, namely K → −K, φ(r, t) → iφ(r, 1 2 − t) * , a r (r, t) → a r (r, 1 2 − t), a t (r, t) → −a t (r, 1 2 − t). (21) The fixed-point sets of (20) and (21) are geodesic hypersurfaces in M given by K ∈ R and K ∈ iR respectively, and the intersections of these with the surfaces S ± and S 0 are geodesics. For S ± , this leads to a picture which is essentially the same as in the R 3 case: the geodesics pass over the "centre" of the deformed cone, and this corresponds to 90 • planar scattering of two monopoles, via a toroidal 2-monopole solution. The geodesics on the double-trumpet S 0 are more interesting, however, and we shall focus on them in what follows. In terms of the coordinate W ∈ C * on S 0 , the symmetries (20, 21) lead to five complete geodesics, namely the four half-axes in the W -plane and the unit circle |W | = 1. The last of these is a closed geodesic which winds around the waist of the double-trumpet. 
The two points W = ±1 on it are the two special solutions (3) representing 1-monopole fields taken over two periods, the monopoles being located on the z-axis at z = ±π/2. In fact, the geodesic consists entirely of monopole pairs located at these two points on the z-axis: the monopoles stay in the same position and simply oscillate in shape. This is illustrated in Figure 3 range of W -values, followed by numerical implementation of the inverse Nahm transform. The parameter C was taken to have the value C = 1. Each of the plots in Figure 3 is a contour plot of |Φ| 2 on the plane z = π/2, with the Higgs field Φ having a zero at the centre x = y = 0. The cases W = ±i correspond to K = 0 on each of the two K-sheets, and one then has an additional x ↔ y symmetry which is absent for the other points on this bounded trajectory. The pictures on the plane z = −π/2 are the same. So we have a periodic trajectory representing a string of equally-spaced monopoles, with all their kinetic energy coming from their in-phase shape oscillation. The other four geodesics mentioned above are of scattering type, where two widelyseparated monopoles undergo a head-on collision and then separate again. Let us first describe the W > 0 case, ie. W on the positive real axis. A point with 0 < W 1 corresponds to a pair of monopoles widely-separated on the y-axis, in fact at x = 0 = z, y = ±C/ √ W . At the other end of the geodesic, where W 1, we have monopoles at In other words, the monopoles approach each other along the y-axis, collide, and emerge along the ±y directions but shifted by half a period in z. Using a numerical Nahm transform for a sequence of real W -values near W = 1 reveals what happens to the monopoles as they collide: the results of this are shown in Figure 4. In effect, the scattering takes place in the plane x = 0, and so we use contour plots in this plane. Note that the range of z in the plots has been shifted, using the periodicity, in order to give a clearer representation of the scattering. As W increases from a small positive value, the monopoles come in along the y-axis, with z = 0. By W = 1/2 (the first plot in Figure 4) we see that they have merged at the origin. Then (second plot, with W = 2/3) they begin to separate along the z-axis. At W = 1, they are equidistant: this is a special solution (3). Then they re-merge at z = π on the z-axis, and separate in the ±y-directions. The combination of 90 • scattering in the yz-plane and periodicity in z leads, in this example, to a picture in which monopoles emerge in the same directions as they entered, but shifted by half a period. The plots in Figure 4 are for C = 1; for other values of C, the picture is qualitatively the same, although the details differ -for example, the value of W at which the monopoles merge. Let us consider, next, the geodesic W = ip 2 with p > 0, ie. W on the positive imaginary axis. If p 1, then K ≈ ip 2 and the monopoles are located at x + iy = ±Cpe −iπ/4 , z = π; while if 0 < p 1, then K ≈ −ip −2 and the monopoles are located at x + iy = ±Cp −1 e iπ/4 , z = 0. So here the trajectory is fully 3-dimensional: the monopoles undergo right-angle scattering in the xy-direction, as well as being shifted by half a period in z. As long as C is not too large, any radial line in the W -plane is an approximate geodesic representing head-on scattering of two monopoles, and it is easy to see that we get a picture which interpolates between the two examples above. 
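The following sketch traces the asymptotic part of the W > 0 geodesic just described, using only the relations quoted in the text: K = W + W⁻¹, monopole positions x + iy = ±C√(−K), and the sheet assignment z = 0 for W ≪ 1 versus z = π for W ≫ 1. The behaviour near W ≈ 1, where the monopoles merge and separate along the z-axis, requires the numerical inverse Nahm transform and is not reproduced here.

```python
# Minimal sketch of the head-on scattering geodesic W > 0 on the
# double-trumpet S0, using only the asymptotic relations quoted above:
# K = W + 1/W, monopoles at x + i*y = +/- C*sqrt(-K), and the trajectory
# lies on the sheet z = 0 for W << 1 and z = pi for W >> 1.  The region
# W ~ 1 (merging and z-axis separation) is deliberately not covered.
import numpy as np

C = 1.0
for W in [0.05, 0.2, 0.5, 2.0, 5.0, 20.0]:
    K = W + 1.0 / W                      # real, K >= 2 on this geodesic
    y = C * np.sqrt(K)                   # separation along the y-axis, |x+iy| = C*sqrt(K)
    z = 0.0 if W < 1.0 else np.pi        # half-period shift after the collision
    print(f"W = {W:6.2f}:  monopoles near (0, +/-{y:5.2f}), in the plane z = {z:.2f}")
```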
In fact, the line W = p 2 e iν , with ν fixed, gives a scattering angle of ν, in addition to the z-shift. For large C, however, one gets rather different trajectories: see, for example, figure 6 of [5]. Concluding remarks We have seen that the periodic monopole system admits dynamical behaviour not seen in the non-periodic R 3 case, in particular head-on collision of two monopoles resulting in scattering through any angle, accompanied by a half-period shift. This is a consequence both of the periodicity, and of the absence of rotational symmetry about the periodic axis. It would be worth studying other geodesics on the double-trumpet geodesic surface, not just those representing head-on collisions, as this might reveal further novel behaviour. We have shown that the asymptotic metric on the 2-monopole moduli space may be computed directly via a simple approximation of the relevant Nahm/Hitchin data; and this metric agrees, as expected, with that derived by considering the effective 2-monopole Lagrangian. This asymptotic metric is relatively simple, having continuous symmetries; in particular, one can identify geodesic surfaces which are different from those described in this paper, and which do not involve θ and ω remaining constant. It would be interesting to investigate the global structure of these, and to look for geodesics (trajectories) where θ and/or ω change. The methods used here should extend to the case of higher charge monopoles. In the R 3 case, it is particularly useful to consider multi-monopoles invariant under discrete subgroups of the rotation group [10,8]. Because of the lack of rotational symmetry in the periodic case, it seems unlikely that the full scope of this technique could be applied. But certain discrete symmetries such as cyclic symmetry should remain relevant, and it would be worth making a systematic study of the symmetries of the SU(n) Hitchin system, corresponding to centred periodic n-monopole solutions, for higher values of n. Some preliminary results along these lines have been obtained, and further work is in progress.
8,108.8
2013-09-26T00:00:00.000
[ "Mathematics" ]
Numerical Modeled Static Stress-Deformed State of Parallel Pipes in the Deformable Environment The paper considers the static pressure of the environment on the parallel pipe. The environment is elastic and homogeneous bodies. To determine the ambient pressure, the finite element method is used. An algorithm was developed and a computer program was compiled. Based on the compiled program, numerical results are obtained. The numerical results obtained for two to five parallel pipes are compared with known theoretical and experimental results. INTRODUCTION At present, and in the coming decades, ensuring the operational reliability of the linear part of multi-thread underground pipelines is and will continue to be a complex scientific and engineering problem. In the modern design, various software packages of automated design are widely used, allowing to carry out the engineering analysis of computer models without resorting to real experiments. The most common and efficient calculation method is the finite element method (FEM). When determining the pressure of the soil on the pipes, it is necessary to take into account such factors as: the number of threads, the topography of the embankment, the conditions of supporting the pipes and other factors encountered in design practice. Accounting for other factors in analytical solutions is either extremely complex; or in general is impossible because of the difficulties that arise in this case of a mathematical nature. Various factors encountered in project practice can be accounted for using numerical methods. Recently, when solving various kinds of applied problems, the finite element method (FEM) is widely used. A number of works are known in which domestic [1,2,3,4] and foreign authors [5,6] successfully apply FEM to determine the soil pressure on a single laid extended pipe, under various conditions of its support, taking into account the heterogeneity of the soil composing the body mounds of constant height (flat deformation). STATEMENT OF THE PROBLEM BY THE FINITE ELEMENT METHOD The most common method for calculating complex structures is the finite element method (FEM). Its peculiarity consists in the fact that a design representing a continuous medium is replaced by its analog, composed both of cubes and of a finite number of element blocks, the behavior of each of which can be determined in advance. The interaction of the elements makes it possible to present an overall picture of the deformation of the system. In Figure 1. The cylindrical bodies in the deformed space are depicted. The stiffness characteristics of each of these elements is determined in advance. The stress-strain state of such a complex structure can be determined with the help of FEM. The advantage of the method in its universality: the possibility of using elements of different types, the arbitrariness of the region under consideration, simple methods for constructing elements of high accuracy. In the variant of the method considered below, the method of displacements, when joining elements, the requirement of satisfying natural boundary conditions is not necessary. 
This most famous version of the FEM uses the formulation of the principle of possible displacements: δА = δА 1 + δА 2 = 0 In matrix form for a three-dimensional body, it can be represented as follows: The same state can have the form: Vectors of volume forces, surface forces and mixing of points of the body are as follows: The equilibrium conditions (1) do not depend on which material properties and are valid for both linear and nonlinear systems. For a linearly elastic body having initial deformations, the physical relationships take the form: We use the relations between deformations and displacements, then we get: The matrix [B], which connects deformations with nodal displacements, is important in the further calculation ( Figure 1). The stress vector is defined by equations (2), and taking into account (5) it will look like: Let us consider separately the left and right sides of the equilibrium condition (1). After substituting the deformation vector into the left side of the equation (1), it will be expressed in terms of nodal displacements and some integral indicated by the symbol [К]: is a matrix containing the basic information on the behavior of a small region of a deformed system. It is called the element stiffness matrix and is the main characteristic of the system in the FEM. On the right-hand side of equation (1), the integrals over the volume and over the surface can be represented as follows: METHODOLOGY FOR CALCULATING THE STATIC PRESSURE OF SOIL ON PIPES As a computational model, by analogy with [7,8], a weighty elastic medium ( Figure 1) is used that contains holes and other inclusions supported by circular cylinders and other inclusions (foundation, heterogeneity of the ground, etc.). For pipes according to [9], we assume that the cylinder is welded to the medium (there is no slippage of the soil along the surface of the pipe). On the external contour of the medium, the boundary conditions have the following form [9] (Figure 1): • on vertical boundaries, shear stresses and horizontal displacements are either zero or these boundaries are free; • on the lower horizontal boundary adjacent to the base of the embankment there are no vertical and horizontal movements; • the upper surface is either free from external influences, or loaded with a surface load. The dimensions of the chosen area for the calculation should be optimal, because this affects the time spent on the calculation of the FEC and, consequently, the efficiency of the program based on it. If the soil is an isotropic material or the system of the pipe-soil in question has an axis of symmetry (both in geometry and in material), it is possible to reduce the design area by taking only a symmetrical half of it. The breakdown of the chosen calculation area is carried out in the form of tetrahedral finite elements. In this case, the center mesh should thicken as it approaches the pipes; it is around the pipes that the greatest concentra-tion of soil pressure occurs. To estimate the convergence of the resulting approximate solution corresponding to this breakdown, it is necessary to make a finer division of the computational domain into an exact solution. Then a comparison of the solutions corresponding to both breakdowns should be made. If they differ from each other by an amount greater than the predetermined accuracy of the computations, it is necessary to make an even smaller third partition of the domain and the corresponding solution compare with the solution for the second breakdown, etc. 
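As a compact illustration of the element stiffness matrix [K] = ∫ Bᵀ[D]B dV discussed above, the sketch below assembles the 6×6 stiffness matrix of a plane-strain constant-strain triangle, the two-dimensional analogue of the tetrahedral elements used in the paper. This is a standard textbook formulation, not the MSK-1 program; the node coordinates and thickness are illustrative, while the soil constants E = 30 MPa and ν = 0.3 are the values quoted later for the embankment soil.

```python
# Minimal sketch (not the MSK-1 program): element stiffness matrix
# Ke = t * A * B^T D B for a plane-strain constant-strain triangle, i.e. the
# 2D analogue of the [K] = integral(B^T D B) relation described above.
import numpy as np

def cst_stiffness(xy, E, nu, t=1.0):
    """xy: (3,2) node coordinates; returns the 6x6 element stiffness matrix."""
    (x1, y1), (x2, y2), (x3, y3) = xy
    A = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))  # element area
    b = np.array([y2 - y3, y3 - y1, y1 - y2])
    c = np.array([x3 - x2, x1 - x3, x2 - x1])
    # strain-displacement matrix [B] (3 strains x 6 nodal displacements)
    B = np.zeros((3, 6))
    B[0, 0::2] = b
    B[1, 1::2] = c
    B[2, 0::2] = c
    B[2, 1::2] = b
    B /= 2.0 * A
    # plane-strain elasticity matrix [D]
    f = E / ((1 + nu) * (1 - 2 * nu))
    D = f * np.array([[1 - nu, nu, 0],
                      [nu, 1 - nu, 0],
                      [0, 0, (1 - 2 * nu) / 2]])
    return t * A * B.T @ D @ B

# soil-like properties: E = 30 MPa, nu = 0.3 (values quoted later in the paper)
Ke = cst_stiffness(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]), E=30e6, nu=0.3)
print(Ke.shape, "symmetric:", np.allclose(Ke, Ke.T))
```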
It should be noted that with a dense arrangement of pipes in the places of their contact, "singular points" arise, in a small neighborhood of which it is impossible to achieve the necessary accuracy of calculations for any smallest breakdown (elasticity theory is inapplicable at these points). The same points arise in the places where the pipes rest on a flat base. When determining the soil pressure on rigid round pipes, such as ferroconcrete pipes in particular [10,11], this difficulty is easily overcome by the following method: with the help of FEM, the vertical and horizontal soil pressure at all points of the pipe, except for the special one, is determined; a concentrated force is applied at a particular point, directed vertically at the point of support of the pipes or horizontally at their point of contact, equal in magnitude to the area of the diagram of the vertical and horizontal pressure of the soil acting on the pipes, respectively. We distribute the proper weight of the soil of the embankment according to [3,4] along the breaking points as follows: at each node of this triangular finite element, we apply a downward concentrated force equal in weight to the part of the soil bounded by this element divided by the number of nodes. The surface load is distributed along the nodes of the upper boundary in the form of concentrated forces. If it is necessary to obtain the influence matrices (Green's function), then it is necessary to calculate the unit concentrated force, applying it consistently at each node of the upper boundary. Modeling of materials of soil, pipes and other inclusions is carried out with the help of the corresponding values of elastic constants (E, ν) and specific gravity. This makes it possible to take into account the conditions of supporting the pipes, the heterogeneity and verbosity of the soil of the embankment and the base, and the multitude of laying. PARAMETRIC ANALYSIS OF STRESS-STRAIN STATE OF REINFORCED CONCRETE UNDERGROUND ROUND TUBES Using the program MSK-1, the influence of the following factors on the pressure distribution of the soil of the embankment around the round reinforced concrete underground pipes was investigated: the number of threads, the distance between the pipes, the location of the pipe (extreme, middle), the Poisson coefficient of the embankment soil, the type of pipe support, the change of the relief of the embankment along the pipes, length of pipes. The influence of the number of threads. In Fig. 2, 3, 4 shows the dependence of the maxi-mum soil pressure on the pipes on the number of threads and the Poisson's ratio of the soil. At the same time, the support was firmly supported on a flat solid base. From Figure 2, 3, 4 it follows that the value of σ max for pipes laid in several strings is 10-30% less than the corresponding value for a single laid pipe, which is determined by SNiP2.05.03.-84. In this case, the maximum soil pressure depends on location of the pipe, i. on an average pipe it is 15-25% less than on an extreme one.). The fact that the outer tube is unloaded is less due to the fact that only one nearby middle pipe exerts a significant influence on its unloading, and the other is the outer tube, first, far from it (1.0D), and secondly, between the two outer tubes lies the middle tube, which is a kind of "screen", reducing the mutual unloading effect of the two outer tubes. 
Therefore, in particular, the maximum pressure of the soil on the edge pipe is practically independent of the number of threads (in Figure 4, the value of σ max for pipes of two-thread laying and for the outer pipes of multi-thread laying is shown as a single curve). The middle pipe, by contrast, is unloaded more strongly, since two pipes lie on either side of it rather than one, as in the case of an edge pipe. From Figures 3 and 4 it follows that, for a number of threads greater than three, the value of σ max on the middle pipes is practically independent of the number of threads; this is connected with the concept of a "period" of pipes and is explained in [12]. Moreover, the greater the value of the coefficient ν, the greater the distributing capacity of the ground medium; consequently, for four or more threads the value of σ max on the middle pipe is practically independent of the coefficient ν. An explanation of this phenomenon is given in [12,13]. As can be seen from Figure 3, as the coefficient ν increases, the difference in the values of σ max for pipelines with different numbers of threads decreases. Thus, the maximum ground pressure on pipes of multi-thread laying is less than on a single laid pipe. At the same time, the maximum ground pressure on the outer pipes is greater than on the middle ones. The value of σ max on the edge pipe is practically independent of the number of threads. The maximum soil pressure decreases with increasing number of threads, and for more than three threads this decrease becomes insignificant. Hence it follows that the difference between the maximum soil pressure on the outer and middle pipes of multi-thread laying (n ≥ 3) is practically independent of the number of threads and for closely laid pipes amounts to 15-20%. In addition, with an increase in the Poisson's ratio of the soil, the value of σ max on the edge pipe is reduced. For a number of threads n > 4, the value of σ max on the middle pipe is practically independent of the coefficient ν. Effect of the distance between the pipes. The results of the analysis of the maximum ground pressure on pipes of two- and three-thread laying, as a function of the clear distance d between them, are shown in Figure 5. The graphs in Figure 5 show that as the distance between the pipes increases, the value of σ max increases. At 0 ≤ d/D ≤ 0.5, the increase in σ max is insignificant (3%); at 0.5 < d/D ≤ 2.0 a significant increase in the maximum ground pressure is observed, decaying at d/D > 2. At d/D ≥ 3, the maximum ground pressure on a pipe laid in several threads corresponds to the maximum pressure on a single laid pipe and coincides with the value determined by SNiP 2.05.03-84 (90). Thus, the mutual influence of pipes of multi-thread laying takes place at distances between them d < 3D and leads to a decrease in the maximum ground pressure on them compared with a single laid pipe. The value of σ max on the middle and outer pipes reaches a minimum when the pipes are laid closely (d = 0), amounting to 0.74 and 0.85, respectively, of the maximum pressure on a single laid pipe. On the basis of the obtained dependences on the distance between the pipes, formulas are derived for determining the soil pressure coefficients for pipes of multi-thread laying for 0 < d/D ≤ 2.5, where К CМ and К KМ are the coefficients that take into account the reduction of the maximum soil pressure on the edge and middle pipes of multi-thread laying, respectively, as compared with a single laid pipe.
Analysis of the influence of the distance between the pipes on the horizontal pressure of the soil (σ s ) at the horizontal diameter was carried out by double-laying pipes, the theoretical and experimental studies carried out have shown that the quantity σ s does not depend on the number of threads. In this case, it is necessary to distinguish the horizontal pressure of the soil on the pipe from the side of the adjacent pipe (σ s ) and from the opposite (pipe-free side) (σ s ). From Figure 6 that the horizontal pressure σ r for d/D > 3 is a constant value and coincides with the corresponding σ = 0. At 0 ≤ d/D ≤ 2, it increases intensively and at d/D = 0 tends to infinity. This is due to the appearance of a "singular" point, in which the theory of elasticity is not applicable. The sharp increase in d with decreasing distance d is explained by the convergence of the two stress concentrates, which are the pipes. The influence of the Poisson's ratio on the horizontal pressure of the soil is shown in Figure 7. It follows from the graphs that the values of σ s and σ r increase with increasing coefficient ν, and the horizontal pressure on the side of the adjacent tube increases more intensively, i.e increases the coefficient σ s by a factor 2.8, and σ r in 2.3 times. Stress state of the soil around the pipes For a more complete analysis of the soil pressure on the pipes of multi-stranding, the diagrams of radial (σ r ) and tangential (σ s ) ground pressures are considered for various parameters of laying multi-thread pipes on a flat solid base. In Figure 8-10 shows the diagrams (σ r ) for pipes laid in one and two strands at a distance of d of 0, 0.5D, 1.0D, 2.0D, 3.0D. All the diagrams of the same sign correspond to the compression pressure. It is seen from the diagrams that for d < 3.0D they are asymmetric, and for d = 3, D are symmetric. The presence of the asymmetry of the diagram a is due to the mutual influence of the multiline stacking tubes on the pressure distribution of the soil around each pipe. With an increase in distance 6. this effect gradually weakens and does not affect when d ≤ 3.0D, i.е. in this case the tube of multi-strand folding of the diagram σ s is practically symmetrical. Therefore, in the design of pipes, the deviation of the ordinate σ max from the vertical diameter can be ignored for d > 2.0D. The ordinate of the maximum radial pressure deviates from the pipe lock in the opposite direction to the location of the adjacent pipe. The analysis shows that this effect is manifested especially in multiline stacking at d < 2.0D. This is due to the fact that on one side of the end pipe is located next to it a pipe that unloads the first. The opposite side is free and there is no unloading effect from this side the outer tube receives. Due to this "unbalanced unloading" of the outer tube, the value of σ max is shifted. In Figure 11 is a graph of the dependence of the deviation of the ordinate σ max (angle β) on the parameter d. This biconvex curve whose ordinates decrease with increasing d. At 0 < d < 0.5D and d > 2.0D, the angle β varies insignificantly (actually from 15° to 14°30' and from 2° to 0°). The main change in angle β occurs at 0.5 < d < 2, D. The maximal value of the ordinate σ max reaches at d = 0 (tubes laid close), minimal (00) at d < 3.0D, when each pipe works as a single stacked one. 
The analysis in Figures 8-10 shows that the diagrams of σ max on the half of the pipe opposite the location of the adjacent pipe (in the Figure the left half of the diagrams to the ordinate σ max ) in all cases the effect of two pipes at d/D < 3.0 on the pressure distribution of the soil around of them is local and extends to a section up to 15°< and < 180° (in the figure, the right half of the diagrams) In Figures 8, 12, 13 shows the diagrams of the radial pressure of the soil σ r on a single laid pipe and on pipes laid closely in two, three, four and five threads. In all cases, the pressure diagrams on the outer tubes are asymmetric, and the average tubes (for n > 3) are explained by symmetrical unloading by two adjacent pipes. The upper part of the diagrams of the multiline stacking is slightly flattened in comparison with the diagram for a single laid pipe (Figure 8). This oblateness is greater for medium pipes than for extreme tubes, which indicates a more uniform distribution of pressures, their greater load. It should also be noted that the diagrams σ s for the outer tubes of multiline stacking practically do not differ from the σ s diagram for double-laying pipes. Thus, when constructing the diagram σ s for the end pipe of multiline stacking, we can use the results of calculation of double-laying pipes. It follows from Figure 13, b (n = 5), the diagrams for the central and neighboring middle tubes practically coincide. The diagrams for the medium pipes for n = 4 (Figure 13, a) and for n = 5 are also small from each other. Thus, when determining the pressure of the soil on the pipes of pipes of four-stranding. The concept of "pipe period" was also introduced there. It means a minimum number of pipes, in which the addition of another pipe from the edge practically has no effect on the stress-strain state of the soil around the central pipe. Consequently, the value of the period for the sleeves is four. Analogously, the value of the period (T) of the pipes laid at some distance from each other was analyzed. The results of this analysis are presented in Table 1. From Table 1 it can be seen that the value of T decreases with increasing distance between the pipes. This is due to a decrease in the mutual influence of the pipes as the distance between them increases. In order to present the general picture of the distribution of the radial pressure of the embankment on the pipes in Figure 13 shows the lines of equal radial pressures for pipes laid in one, two and three threads respectively. Symmetrical arrangement lrr. is typical for a single laid pipe ( Figure 13). In addition, lr.d. are also symmetric for central tubes for odd multicultural packing in the vicinity of 1.5D from the center of the pipe in both directions (for example, for n = 3 Figure 12). For the outermost tubes, asymmetry and displacement of the vertices are observed. in the opposite direction from the adjacent pipes ( Figure 12). In addition, σ r at n = 2 and n = 3 have less ordinates and are more flattened than for a single laid pipe (n = 1). This flattening indicates a more uniform ground pressure on multi-threaded pipes compared to a single-laid σ r for double-laying pipes and the three-threaded outer tubes are almost identical. Figure 15 shows the diagrams of m for a single pipe and double-laying pipes with a distance in the light d = 0 ... 3. 
It is characteristic that on the half of the pipe free from the influence of the adjacent one (in the figure, the left half) τ does not depend on the parameter d, and the τ diagram is the same as for the single pipe. The ordinates of the right half of the τ diagram for 0 < d < 3.0D are smaller than those of the left half owing to the unloading effect of the neighbouring pipe. For d ≥ 3.0D this influence is no longer felt and the τ diagram is similar to that of a single pipe. The maximum of the tangential pressures on either half of the diagram is reached at θ = 60° from the vertical axis in both directions. The largest value τ_max for the outer pipes occurs in all cases on the left half of the pipe (free from the influence of the neighbouring pipe) and exceeds the smaller tangential pressures on the right half of the pipe by a factor of 2.2 at d = 0, 1.55 at d = 0.5D and 1.1 at d = 1.0D. Influence of the type of support of the pipes. Figure 16 shows graphs of the dependence of σ_max on the type of support of the pipes and on the Poisson coefficient ν. In the calculation the following support conditions for the pipes, used in hydraulic engineering practice, were considered: • a base with an angle of coverage 2α_0 = 90°; • a base with an angle of coverage 2α_0 = 120°; • a foundation with an angle of coverage 2α_0 = 120°, the height of the foundation from the ground surface to the lowest point of the pipe being taken as 0.2D. In addition, support on a flat solid base was considered. As can be seen from Figure 16, the largest value of σ_max corresponds to support on the foundation, and the smallest to the base with a coverage angle 2α_0 = 120°. For example, at ν = 0.1 the value of σ_max for pipes supported on the foundation is larger than the corresponding value for pipes resting on a flat solid base by 3%, on the base with an angle of coverage 2α_0 = 90° by 6%, and on the base with an angle of coverage 2α_0 = 120° by 8%. This is explained by the fact that the higher the pipe protrudes above the surface of the base (together with the foundation), the more the soil pressure acting on this pipe is concentrated. We also note that, regardless of the type of support, the quantity σ_max decreases with increasing coefficient ν. With an increase in the coefficient ν by a factor of 4, σ_max decreases by a factor of 1.17-1.21, depending on the type of support of the pipe. Given the slight change in σ_max with the method of support (2-8%), this factor can be ignored when designing pipes resting on a solid base. Table 2 shows the dependence of the coefficient of maximum vertical soil pressure (K_max = σ_zz,max/(γ·h_max), where h_max is the maximum embankment height) on reinforced concrete pipes on the number of threads and on the longitudinal profile of the embankment. The pipes are supported by a reinforced concrete foundation with an angle of coverage of up to 120°. The wall thickness of the pipes is 0.1D, and the clear distance between the pipes is 0.5D. The pipes are made of concrete of class B25 (ν = 0.15; E = 30000 MPa); the embankment soil has the elastic constants ν = 0.3 and E = 30 MPa. The first row of Table 2 gives the results for long pipes laid under an embankment of constant height (plane deformation). The second row gives the results for long pipes laid under an embankment with a variable longitudinal profile in the form of a triangle with a slope angle β of up to 30°. From Table 2 it follows that the coefficient K_max decreases with an increasing number of threads.
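As a small numerical illustration of the coefficient just introduced, the sketch below evaluates K_max = σ_zz,max/(γ·h_max) for a few cases; all inputs (unit weight, embankment height, stresses for one-, two- and three-thread laying) are hypothetical placeholders, not the data behind Table 2.

```python
# Illustrative computation of the maximum vertical-pressure coefficient
# K_max = sigma_zz_max / (gamma * h_max).  All numbers below are made up.
def k_max(sigma_zz_max, gamma, h_max):
    return sigma_zz_max / (gamma * h_max)

gamma = 18.0e3    # unit weight of the embankment soil, N/m^3 (assumed)
h_max = 10.0      # maximum embankment height, m (assumed)

# Hypothetical maximum vertical stresses sigma_zz_max for n = 1, 2, 3 threads.
for n_threads, sigma in [(1, 2.6e5), (2, 2.4e5), (3, 2.2e5)]:
    print(f"n = {n_threads}:  K_max = {k_max(sigma, gamma, h_max):.2f}")
```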
This holds both for the plane problem (the first row) and for the spatial one (the second row). For example, the value of K_max for the middle pipe of a three-thread laying (n = 3) is 35% smaller than the corresponding value for a single pipe (n = 1) in the case of the plane problem (β = 0°), and 37% smaller in the case of the spatial problem (β = 30°). To analyze the influence of the longitudinal relief of the embankment on the soil pressure on the pipes and to compare the results of the plane and spatial problems, the maximum height of the embankment (β = 30°) was assumed equal to the height of the embankment of constant height (β = 0°). It follows from Table 2 that the values in the first row differ from the corresponding values of the second row by an average of 30%. From this it follows that taking into account the variation of the embankment height along the length of the pipe reduces the design ground pressure in comparison with the calculation performed under the plane-deformation scheme. This effect was obtained for the first time. As follows from Table 2, the effect is slightly less pronounced for a single pipe (29%) and slightly stronger for two-thread (32%) and three-thread (30%) laying. Influence of the length of the pipes. Table 3 shows the dependence of the coefficient K_max for reinforced concrete pipes of two-thread laying on their length l (β = 0°). From Table 3 it follows that the coefficient K_max decreases with decreasing pipe length. When the length l ≥ 10.0D, its effect on K_max is insignificant. Thus, the length l = 10.0D is the boundary of applicability of the plane theory of elasticity (plane deformation) for extended pipelines at a constant height of the embankment. In [1] the concept of the "core" is derived, which in our case is equal to 10.0D and is the boundary between "short" and "long" pipes; at l < 10.0D the plane-deformation scheme gives an overestimate of the coefficient K_max even at a constant height of the embankment. This overestimate is 38% at l = 6.0D and 55% at l = 4.0D. Thus, taking into account the length of the pipes reduces the design ground pressure in comparison with the calculation using the plane-deformation scheme if l < 10.0D.
6,489
2018-07-05T00:00:00.000
[ "Engineering", "Physics" ]
The Rise-Contact involution on Tamari intervals We describe an involution on Tamari intervals and m-Tamari intervals. This involution switches two sets of statistics known as the "rises" and the "contacts" and so proves an open conjecture of Préville-Ratelle on intervals of the m-Tamari lattice. Introduction The Tamari lattice [Tam62, HT72] is a well-known lattice on Catalan objects, most frequently described on binary trees, Dyck paths, and triangulations of a polygon. Among its many interesting combinatorial properties is the study of its intervals. Indeed, it was shown by Chapoton [Cha07] that the number of intervals of the Tamari lattice on objects of size n is given by (1.1) $\frac{2}{n(n+1)}\binom{4n+1}{n-1}$. This is a surprising result: it is not common to find a closed formula counting intervals in a lattice. For example, there is no such formula counting the intervals of the weak order on permutations. Even more surprising is that this formula also counts the number of simple rooted triangular maps, which led Bernardi and Bonichon to describe a bijection between Tamari intervals and said maps [BB09]. This is a strong indication that Tamari intervals have deep and interesting combinatorial properties. One generalization of the Tamari lattice is to describe it on m-Catalan objects. This was done by Bergeron and Préville-Ratelle [BPR12]. Again, they conjectured that the number of intervals could be counted by a closed formula, which was later proved in [BMFPR11]: (1.2) $\frac{m+1}{n(mn+1)}\binom{(m+1)^2 n + m}{n-1}$. In this case, the connection to maps is still an open question. In their paper, the authors of [BMFPR11] noticed an equi-distribution on Tamari intervals between two statistics related to contacts and rises of the involved Dyck paths. At this stage, the equi-distribution could be seen directly on the generating function of the intervals, but there was no combinatorial explanation. In his thesis [PR12], Préville-Ratelle developed the subject and left some open problems and conjectures. The one related to the contacts and rises of Tamari intervals is Conjecture 17, and it is the one we propose to prove in this paper. It describes an equi-distribution not only between two statistics (as in [BMFPR11]) but between two sets of statistics. Basically, in [BMFPR11], only the initial rise of a Dyck path was considered, whereas in Conjecture 17 Préville-Ratelle considers all positive rises of the Dyck path. Besides, a third statistic is described, the distance, which also appears in many other open conjectures and problems of Préville-Ratelle's thesis: it is related to trivariate diagonal harmonics, which is the original motivation of the m-Tamari lattice. According to Préville-Ratelle, Conjecture 17 can be proved both combinatorially and through the generating function when m = 1. But until now, there was no proof of this result when m > 1. To prove this conjecture, we use a combinatorial object that we introduced in a previous paper on Tamari intervals [CP15]: the interval-posets. They are posets on integers, satisfying some simple local rules, and are in bijection with the Tamari intervals. Besides, their structure includes two planar forests (from the two bounds of the Tamari interval) which are very similar to the Schnyder woods of triangular planar maps. Another quality of interval-posets is that m-Tamari intervals are also in bijection with a subfamily of interval-posets, which is the key to proving the result when m > 1.
Section 2 of this paper gives a proper definition of Tamari intervalposet and re-explore the link with the Tamari lattice in the context of our problem. In Section 3, we describe the rise, contact, and distance statistics and their relations to interval-posets statistics. This allows us to state Theorem 3.4 which expresses our version of Conjecture 17 in the case m = 1. Section 4 is dedicated to the proof of Theorem 3.4 through an involution on interval-posets described in Theorem 4.22. However, the main results of our paper lies in our last section, Section 5, where we are able to generalize the involution to the m > 1 case. Theorem 5.5 is a direct reformulation of Conjecture 17 from [PR12]. It is a consequence of Theorem 5.18 which describes an involution on intervals of the m-Tamari lattice. Remark 1.1. A previous version of this involution was described in a extended abstract [CCP14]. This was only for the m = 1 case and did not include the whole set of statistics. Also, in this original description, the fact that it was an involution could be proved but was not clear. We leave it to the curious reader to see that the bijection described in [CCP14] is indeed the same as the one we are presenting in details now. Remark 1.2. This paper comes with a complement Sage-Jupyter notebook [Pon] available on github and binder. This notebook contains Sage code for all computations and algorithms described in the paper. The binder system allows the reader to run and edit the notebook on line. 2. Tamari Interval-posets 2.1. Definition. Let us first introduce some notations that we will need further on. In the following, if P is a poset, then we denote by ⊳ P , P , ⊲ P and P the smaller, smaller-or-equal, greater and greateror-equal, respectively, relations of the poset P . When the poset P can be uniquely inferred from the context, we will sometimes leave out the subscript "P ". We write (2.1) rel(P ) = {(x, y) ∈ P, x ⊳ y} for the set of relations of P . A relation (x, y) is said to be a cover relation if there is no z in P such that x ⊳ z ⊳ y. The Hasse diagram of a poset P is the directed graph formed by the cover relations of the poset. A poset is traditionally represented by its Hasse diagram. We say that we add a relation (i, j) to a poset P when we add (i, j) to rel(P ) along with all relations obtained by transitivity (this requires that neither i ⊳ P j nor j ⊳ P i before the addition). Basically, this means we add an edge to the Hasse Diagram. The new poset P is then an extension of the original poset. We now give a first possible definition of interval-posets. Definition 2.1. A Tamari interval-poset (simply referred as intervalposet in this paper) is a poset P on {1, 2, ..., n} for some n ∈ N, such that all triplets a < b < c in P satisfies the following property which we call the Tamari axiom: Figure 1 shows an example and a counter-example of interval-posets. The first poset is indeed an interval-poset. The Tamari axiom has to be checked on every a < b < c such that there is a relation between a and c: we check the axiom on 1 < 2 < 3 and 3 < 4 < 5 and it is satisfied. The second poset of Figure 1 is not an interval poset: it contains 1 ⊳ 3 but not 2 ⊳ 3 so the Tamari axiom is not satisfied for 1 < 2 < 3. Definition 2.2. Let P be an interval-poset and a, b ∈ P such that a < b. Then • if a ⊳ b, then (a, b) is said to be an increasing relation of P . • if b ⊳ a, then (b, a) is said to be a decreasing relation of P . 
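The statement of the Tamari axiom seems to have been lost in extraction; from the counter-example discussed above (1 ⊳ 3 without 2 ⊳ 3), it reads: for all a < b < c, if a ⊳ c then b ⊳ c and, symmetrically, if c ⊳ a then b ⊳ a. The sketch below checks this condition on a transitively closed set of relations; it is plain Python rather than the Sage code of the companion notebook, and the two relation sets are small illustrative examples mirroring Figure 1.

```python
# Sketch of a checker for the Tamari axiom of Definition 2.1, assuming the
# axiom reads: for all a < b < c, a below c forces b below c, and c below a
# forces b below a.  Relations are pairs (x, y), x below y, transitively closed.
def satisfies_tamari_axiom(n, relations):
    rel = set(relations)
    for a in range(1, n + 1):
        for c in range(a + 2, n + 1):
            for b in range(a + 1, c):
                if (a, c) in rel and (b, c) not in rel:
                    return False
                if (c, a) in rel and (b, a) not in rel:
                    return False
    return True

print(satisfies_tamari_axiom(3, {(1, 3), (2, 3)}))   # True
print(satisfies_tamari_axiom(3, {(1, 3)}))           # False: 1 below 3 but 2 is not
```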
As an example, the increasing relations of the interval-poset of Figure 1 are (1, 3) and (2, 3) and the decreasing relations are (2, 1), (4, 3), and (5, 3). Clearly a relation x ⊳ y is always either increasing or decreasing and so one can split the relations of P into two non-intersecting sets. Definition 2.3. Let P be an interval-poset. Then, the final forest of P , denoted by F ≥ (P ), is the poset formed by the decreasing relations of P , i.e., b ⊳ F ≥ (P ) a if and only if (b, a) is a decreasing relation of P . Similarly, the initial forest of P , denoted by F ≤ (P ), is the poset formed by the increasing relations of P . By Definition 2.1 it is immediate that the final and initial forests of an interval-poset are also interval-posets. By extension, we say that an interval-poset containing only decreasing (resp. increasing) relations is a final forest (resp. initial forest). The designation forest comes from the result proved in [CP15] that an interval-poset containing only increasing (resp. decreasing) relations has indeed the structure of a planar forest, i.e., every vertex in the Hasse diagram has at most one outgoing edge. The increasing and decreasing relations of an interval-poset play a significant role in the structure and properties of the object. We thus follow the convention described in [CP15] to draw interval-posets which differs from the usual representation of posets through their Hasse diagram. Indeed, each interval-poset is represented with an overlay of the Hasse Diagrams of both its initial and final forests. By convention, an increasing relation b ⊳ c with b < c is represented in blue with c on the right of b. A decreasing relation b ⊳ a with a < b is represented in red with a above b. In general a relation (either increasing or decreasing) between two vertices x ⊳ y is always represented such that y is on a righter and upper position compared to x. Thus, the color code, even though practical, is not essential to read the figures. Figure 2 shows the final and initial forests of the interval-poset of Figure 1. A more comprehensive example is shown in Figure 3. Following our conventions, you can read off, for example, that 3 ⊳ 4 ⊳ 5 and that 9 ⊳ 8 ⊳ 5. Hasse diagram of P F ≥ (P ) F ≤ (P ) P drawn as interval-poset We also define some vocabulary on the vertices of the interval-posets related to the initial and final forests. Definition 2.4. Let P be an interval-poset. Then • a vertex b is said to be a decreasing root of P if there is no a < b with a decreasing relation b ⊳ a; • a vertex b is said to be an increasing root of P if there is no c > b with an increasing relation b ⊳ c; • a decreasing-cover (resp. increasing-cover) relation is a cover relation of the final (resp. initial) forest of P ; • the decreasing children of a vertex b are all elements c > b such that c ⊳ b is a decreasing-cover relation; • the increasing children of a vertex b are all elements a < b such that a ⊳ b is a increasing-cover relation. We also need to refine the notion of extension related to increasing and decreasing relations. In other words, J is an extension of I if it is obtained by adding relations to I, it is a decreasing-extension if it is obtained by adding only decreasing relations and it is a increasing-extension if it is obtained by adding only increasing relations. Remark 2.6. If you add a decreasing relation (b, a) to an intervalposet I, all extra relations that are obtained by transitivity are also decreasing. 
Indeed, suppose that J is obtained from I by adding the relation b ⊳ a with a < b (in particular neither (a, b) nor (b, a) is a relation of I). And suppose that the relation i ⊳ J j with i < j is added by transitivity which means i ⋪ I j, i I b and a I j. If i < a, the Tamari axiom on (i, a, b) implies a ⊳ I b which contradicts our initial statement. So we have a < i < j and a ⊳ I j, the Tamari axiom on (a, i, j) implies i ⊳ I j and again contradicts our statement. Note on the other hand that nothing guarantees that the obtained poset is still an interval-poset. Similarly, if you add an increasing relation (a, b) to an interval-poset, you obtain an increasing-extension. 2.2. The Tamari lattice. It was shown in [CP15] that Tamari intervalposets are in bijection with intervals of the Tamari lattice. The main purpose of this paper is to prove a conjecture of Préville-Ratelle [PR12] on Tamari intervals. To do so, we first give a detailed description of the relations between interval-posets and the realizations of the Tamari lattice in terms of trees and Dyck paths. Let us start with some reminder on the Tamari lattice. Definition 2.7. A binary tree is recursively defined by being either • the empty tree, denoted by ∅, • a pair of binary trees, respectively called left and right subtrees, grafted on a node. If L and R are two binary trees, we denote by •(L, R) the binary tree obtained from L and R grafted on a node. What we call a binary tree is often called a planar binary tree in the literature (as the order on the subtrees is important). Note that in our representation of binary trees, we never draw the empty subtrees. The size of a binary tree is defined recursively: the size of the empty tree is 0, and the size of a tree •(L, R) is the sum of the sizes of L and R plus 1. It is also the number of nodes. For example, the following tree has size 3, it is given by the recursive grafting •(•(∅, •(∅, ∅)), ∅). It is well known that the unlabeled binary trees of size n are counted by the n th Catalan number (2.2) 1 n + 1 2n n . Definition 2.8 (Standard binary search tree labeling). Let T be a binary tree of size n. The binary search tree labeling of T is the unique labeling of T with labels 1, . . . , n such that for a node labeled k, all nodes on the left subtree of k have labels smaller than k and all nodes on the right subtree of k have labels greater than k. An example is given in Figure 4. In other words, the binary search tree labeling of T is an in-order recursive traversal of T : left, root, right. For the rest of the paper, we identify binary trees with their corresponding binary search tree 1 2 3 4 5 Figure 4. A binary search tree labeling labeling. In particular, we write v 1 , . . . , v n the nodes of T : the index of the node corresponds to its label in the binary search tree labeling. To define the Tamari lattice, we need the following operation on binary trees. Definition 2.9. Let v y be a node of T with a non-empty left subtree of root v x . The right rotation of T on v y is a local rewriting which follows Figure 5 Figure 5. Right rotation on a binary tree. It is easy to check that the right rotation preserves the binary search tree labeling. It is the cover relation of the Tamari lattice [Tam62, HT72]: a binary tree T is said to be bigger in the Tamari lattice than a binary tree T ′ if it can be obtained from T ′ through a sequence of right rotations. The lattices for the sizes 3 and 4 are given in Figure 6. 
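The binary search tree labelling of Definition 2.8 amounts to labelling the nodes in the order of an in-order (left, root, right) traversal. Here is a minimal sketch, with binary trees encoded as nested pairs (left, right) and None for the empty tree, an encoding we reuse further below; the example is the size-3 tree •(•(∅, •(∅, ∅)), ∅) mentioned above.

```python
# Sketch of the binary search tree labelling of Definition 2.8: an in-order
# traversal assigns the labels 1..n.  Trees are nested pairs (left, right),
# None being the empty tree; labelled nodes become triples (label, left, right).
def bst_labelling(tree, next_label=1):
    if tree is None:
        return None, next_label
    left, right = tree
    new_left, next_label = bst_labelling(left, next_label)
    label = next_label
    new_right, next_label = bst_labelling(right, next_label + 1)
    return (label, new_left, new_right), next_label

# The size-3 tree described above: a root whose left child has a right child.
t = ((None, (None, None)), None)
labelled, _ = bst_labelling(t)
print(labelled)   # (3, (1, None, (2, None, None)), None)
```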
Dyck paths are another common set of objects used to define the Tamari lattice. First, we recall their definition. Definition 2.10. A Dyck path of size n is a lattice path from the origin (0, 0) to the point (2n, 0) made from a sequence of up-steps (steps of the form (x, y) → (x + 1, y + 1)) and down-steps (steps of the form (x, y) → (x + 1, y − 1)) such that the path stays above the line y = 0. A Dyck path can also be considered as a binary word by replacing up-steps by the letter 1 and down-steps by 0. We call a Dyck path primitive if it only touches the line y = 0 on its end points. As widely known, Dyck paths are also counted by the Catalan numbers. There are many ways to define a bijection between Dyck paths and binary trees. The one we use here is the only one which is consistent with the usual definition of the Tamari order on Dyck paths. Definition 2.11. We define the tree map from the set of all Dyck paths to the set of binary trees recursively. Let D be a Dyck path. • If D is empty, then tree(D) to be the empty binary tree. • If D is of size n > 0, then the binary word of D can be written uniquely as D 1 1D 2 0 where D 1 and D 2 are Dyck paths of size smaller than n (in particular, they can be empty paths). Then tree(D) is the tree •(tree(D 1 ), tree(D 2 )). Note that the path defined by 1D 2 0 is primitive; it is the only nonempty right factor of the binary word of D which is a primitive Dyck path. Similarly, the subpath D 1 corresponds to the left factor of D up to the last touching point before the end. Consequently, if D is primitive, then D = 1D 2 0, while D 1 is empty and thus tree(D) is a binary tree whose left subtree is empty. If both D 1 and D 2 are empty, then D = 10, the only Dyck path of size 1, and tree(D) is the binary tree formed by a single node. The tree map is a bijection and preserves the size as it is illustrated in Figure 7. Figure 7. Bijection between Dyck paths and binary trees. ←→ Following this bijection, one can check that the right rotation on binary trees corresponds to the following operation on Dyck paths. By extension, we then say that a Dyck path D is bigger than a Dyck path D ′ in the Tamari lattice if it can be obtained from D ′ through a series of right rotations. The Tamari lattices of sizes 3 and 4 in terms of Dyck paths are given in Figure 9. An example of the construction is given in Figure 10. As explained in [CP15], both the initial and the final forest constructions give bijections between binary trees and planar forests, i.e., forests of trees where the order on the trees is fixed as well as the orders of the subtrees of each node. Indeed, first notice that the labeling on both images F ≥ (T ) and F ≤ (T ) is entirely canonical (such as the labeling on the binary tree) and can be retrieved by only fixing the order in which to read the trees and subtrees. Then these are actually well known bijections. The one giving the final forest is often referred to as "left child = left brother" because it can be achieved directly on the unlabeled binary tree by transforming every left child node into a left brother and by leaving the right child nodes as sons. Thus in Figure 10, 2 is the left child of 3 in T and it becomes the left brother of 3 in F ≥ (T ), 9 is a right child of 7 in T and it stays the right-most child of 7 in F ≥ (T ). The increasing forest construction is then the "right child = right brother" bijection. Also, the initial and final forests of a binary tree T are indeed initial and final forests in the sense of interval-posets. 
In particular, they are interval-posets. The fact that they contain only increasing (resp. decreasing) relations is given by construction. It is left to check that they satisfy the Tamari axiom on all their elements: this is due to the binary search tree structure. In particular, if you interpret a binary search tree as poset by pointing all edges toward the root then it is an interval-poset. Theorem 2.14 (from [CP15]). Let T 1 and T 2 be two binary trees and R = rel(F ≥ (T 1 )) ∪ rel(F ≤ (T 2 )). Then, R is the set of relations of a poset P if and only if T 1 ≤ T 2 in the Tamari lattice. And in this case, P is an interval-poset. This construction defines a bijection between interval-posets and intervals of the Tamari lattice. There are two ways in which R could be not defining a poset. First, R could be non transitive. Because of the structure of initial and final forests, this never happens. Secondly, R could be non anti-symmetric by containing both (a, b) and (b, a) for some a, b ≤ n. This happens if and only if T 1 ≤ T 2 . You can read more about this bijection in [CP15]. Figure 11 gives an example. To better understand the relations between Tamari intervals and interval-posets, we now recall some results from [CP15, Prop. 2.9] which are immediate from the construction of interval-posets and the properties of initial and final forests. As the Tamari lattice is also often defined on Dyck paths, it is legitimate to wonder what is the direct bijection between a Tamari interval [D 1 , D 2 ] of Dyck paths and an interval-poset. Of course, one can just transform D 1 and D 2 into binary trees through the bijection of Definition 2.11 and then construct the corresponding final and initial forests. But because many statistics we study in this paper are more naturally defined on Dyck paths than on binary trees, we give the direct construction. Recall that for each up-step d in a Dyck path, there is a corresponding down-step d ′ which is the first step you meet by drawing a horizontal line starting from d. From this, one can define a notion of nesting: an up-step d 2 (and its corresponding down-step d ′ 2 ) is nested in (d, d ′ ) if it appears in between d, d ′ in the binary word of the Dyck path. This bijection is actually a very classical one. It consists of shrinking the Dyck path into a tree skeleton. In Figure 12, we show in parallel the process of Proposition 2.16 on the Dyck path and the corresponding binary tree. Step 1: label the up-steps and their corresponding down-steps from left to right. Step 2: transform nestings into poset relations. Proof. We use the recursive definition of the tree map. Let D be a non empty Dyck path and T = tree(D). We want to check that P is equal to F := F ≥ (T ). The path D decomposes into D = D 1 1D 2 0 with tree(D 1 ) = T 1 the left subtree of T and tree(D 2 ) = T 2 , the right subtree of T . We assume by induction that the proposition is true on F ≥ (D 1 ) and F ≥ (D 2 ). Let 1 ≤ k ≤ n be such that size(D 1 ) = k − 1 (in Figure 12, k = 5): then k is the label of the pair (1, 0) which appears in the decomposition of D. We also have that v k is the root of T . Now let us choose a < b ≤ n. Either • a < b < k: the pairs of steps labeled by a and b both belong to D 1 , we have b ⊳ P a if and only if b ⊳ F a by induction. • b = k: the pair labeled by a belongs to D 1 . It does not nest k, so b ⋪ P a. In T , v a is in T 1 , the left subtree of T and so we also have b ⋪ F a. • a = k: the pair labeled by b belongs to D 2 . It is nested in k, so b ⊳ P a. 
In T , v b belongs to T 2 the right subtree of D, we have b ⊳ F a. • k < a < b: the pairs of steps labeled by a and b both belong to D 2 , we have b ⊳ P a if and only if b ⊳ F a by induction. On binary trees, the constructions of the final and initial forests are completely symmetrical: the difference between the two only consists of a choice between left subtrees and right subtrees. Because the leftright symmetry of binary trees is not obvious when working on Dyck paths, the construction of the initial forest from a Dyck path gives a different algorithm than the final forest one. Proposition 2.17. Let D be a Dyck path of size n, we construct a directed graph following this process: • label all up-steps of D from 1 to n from left to right, • for each up-step a, find, if any, the first up-step b following the corresponding down-step of a and add the edge a −→ b. Then this resulting directed graph is the Hasse diagram of the initial forest of D. The construction is illustrated on Figure 13. Proof. We use the same induction technique as for the previous proof. As before, we have D = D 1 1D 2 0 along with the corresponding trees T , T 1 , and T 2 and size(D 1 ) = size(T 1 ) = k − 1. We set F := F ≤ (T ) and we call P the poset obtained by the algorithm. Step 1: label all up-steps from left to right. Step 2: Connect each up-step to the first-up step following its down-step. First, let us prove that for all a < k, we have a ⊳ P k. Indeed suppose there exists a < k with a ⋪ P k, we take a to be maximal among those satisfying these conditions. We have a ∈ D 1 so its corresponding down-step appears before k, let a ′ ≤ k be the first up-step following the down-step of a. If a ′ = k, then (a, k) is in the Hasse diagram of P and so a ⊳ P k. If a ′ < k, we have a ⊳ P a ′ by definition and the maximality of a gives a ′ ⊳ k which implies a ⊳ P k by transitivity. Now let us choose a < b ≤ n. Either • a < b < k: the up-steps labeled by a and b both belong to D 1 , we have a ⊳ P b if and only if a ⊳ F b by induction. • b = k: in T , b is the root and a is in its left subtree: we have a ⊳ F b. In P , we have also proved a ⊳ P b. • a = k: the corresponding down-step of a is the last step of D which means there is no edge (a, b) in P . Similarly, because a is the tree root, there is no edge (a, b) in F . • k < a < b: the up-steps labeled by a and b both belong to D 2 , we have a ⊳ P b if and only if a ⊳ F b by induction. Now that we have described the relation between interval-posets and Tamari intervals both in terms of binary trees and Dyck path, we will often identify a Tamari interval with its interval-poset. When we refer to Tamari intervals in the future, we consider that they can be given indifferently by a interval-poset or by a couple of a lower bound and an . This is also true of C(D) even though it is less obvious. It will become clear once we express the statistics in terms of planar forests. At first, let us use the definitions on Dyck paths to express our main result on Tamari intervals. Definition 3.2. Consider an interval I of the Tamari lattice described by two Dyck paths D 1 and To summarize, all the statistics we defined on Dyck paths are extended to Tamari intervals by looking at the lower bound Dyck path D 1 when considering contacts and the upper bound Dyck path D 2 when considering rises. Most of these statistics have been considered before on both Dyck paths and Tamari intervals. 
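The tree map of Definition 2.11 and the two forest constructions of Propositions 2.16 and 2.17 all translate into short algorithms. The sketch below is a plain-Python illustration (the companion notebook [Pon] contains the authors' Sage code); it assumes Dyck paths are encoded as 0/1 words read left to right and binary trees as nested pairs (left, right) with None for the empty tree. The example path and expected outputs are ours, not those of Figures 7, 12 or 13.

```python
# tree(D): the recursive bijection of Definition 2.11.  The word decomposes as
# D1 + "1" + D2 + "0", where the primitive right factor starts at the last
# return of the path to height 0 before its end.
def tree(word):
    if not word:
        return None
    height, cut = 0, 0
    for i, letter in enumerate(word[:-1]):
        height += 1 if letter == "1" else -1
        if height == 0:
            cut = i + 1
    d1, primitive = word[:cut], word[cut:]
    return (tree(d1), tree(primitive[1:-1]))

# Matching of up-steps (labelled 1..n from left to right) with their
# corresponding down-steps, used by both forest constructions.
def matched_pairs(word):
    stack, spans, label = [], {}, 0
    for pos, letter in enumerate(word):
        if letter == "1":
            label += 1
            stack.append((label, pos))
        else:
            lab, start = stack.pop()
            spans[lab] = (start, pos)
    return spans

# Proposition 2.16: nestings give the decreasing relations of the final forest.
def final_forest(word):
    spans = matched_pairs(word)
    return {(b, a) for a, (sa, ea) in spans.items()
                   for b, (sb, eb) in spans.items() if sa < sb and eb < ea}

# Proposition 2.17: each up-step points to the first up-step following its
# down-step; these edges form the Hasse diagram of the initial forest.
def initial_forest_edges(word):
    spans = matched_pairs(word)
    ups = sorted((s, lab) for lab, (s, e) in spans.items())
    edges = set()
    for a, (_, end) in spans.items():
        following = [lab for (s, lab) in ups if s > end]
        if following:
            edges.add((a, following[0]))
    return edges

word = "110100"                             # up, up, down, up, down, down
print(tree(word))                           # (None, ((None, None), None))
print(sorted(final_forest(word)))           # [(2, 1), (3, 1)]
print(sorted(initial_forest_edges(word)))   # [(2, 3)]
```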
In [BMFPR11], one can find the same definitions for the initial rise r 0 (I) and number of non-final contacts c 0 (I). Taking x 0 = y 0 = 1 in C(I, X) and R(I, Y ) corresponds to ignoring 0 values in C(I) and R(I): we find those monomials in Préville-Ratelle's thesis [PR12]. Our definition of C(I, X) is slightly different than the one of Préville-Ratelle: we will explain the correspondence in the more general case of m-Tamari intervals in Section 5. We now describe another statistic from [PR12] which is specific to Tamari intervals: it cannot be defined through a Dyck path statistics on the interval end points. Definition 3.3. Let I = [D 1 , D 2 ] be an interval of the Tamari lattice. A chain between D 1 and D 2 is a list of Dyck paths which connects D 1 and D 2 in the Tamari lattice. If the chains comprises k elements, we say its of length k − 1 (the number of cover relations). We call the distance of I and write d(I) the maximal length of all chains between D 1 and D 2 . For example, if I = [D, D] is reduced to a single element, then d(I) = 0. If I = [D 1 , D 2 ] and D 1 ≤ D 2 is a cover relation of the Tamari lattice, then d(I) = 1. This statistic was first described in [BPR12], it generalizes the notion of area of a Dyck path to an interval. To finish, we need the notation size(I) which is defined to be the size of the elements of I: if I is an interval of Dyck paths of size n, then size(I) = n. Note that is is also the number of vertices of the intervalposet representing I.We can now state the first version of the main result of this paper. For x 0 = y 0 = 1, this corresponds to a special case of [PR12, Conjecture 17] where m = 1, the general case will be dealt in Section 5. The case where X, Y, and q are set to 1 is proved algebraically in [BMFPR11]. In this paper, we give a combinatorial proof by describing an involution on Tamari intervals that switches c 0 and r 0 as well as C and R. The involution is described in Section 4. First, we need some interpretations of the statistics at hands in terms of interval-posets. Definition 3.5. Let I be an interval-poset of size n, we define • dc 0 (I) (resp. ic ∞ (I)) is the number of decreasing (resp. increasing) roots of I. • dc i (I) (resp. ic i (I)) for 1 ≤ i ≤ n is the number of decreasing (resp. increasing) children of the vertex i. Note that we do not include dc n nor ic 1 in the corresponding vectors as they are always 0. The vertices of I are read in their natural order in DC and in reverse order in IC: this follows a natural traversal of the final (resp. initial) forests from roots to leaves. As an example, in Figure 3, we have DC(I) = (3, 0, 2, 0, 0, 4, 0, 0, 1, 0) and IC(I) = (4, 2, 0, 0, 1, 0, 2, 1, 0, 0). Proof. This is clear from the construction of the final forest from the Dyck path given in Proposition 2.16. Indeed, each non-final contact of the Dyck path corresponds to exactly one decreasing root of the interval-poset. Then the decreasing children of a vertex are the contacts of the Dyck path nested in the corresponding (up-step, down-step) tuple. Remark 3.7. The vector IC(I) is not equal to R(I) in general. In fact, the interpretation of rises directly on the interval-poset is not easy. What we will prove anyway is that the two vectors can be exchanged through an involution on I. This involution is shown in Section 4 and is a crucial step in proving Theorem 3.4. Distance and Tamari inversions. 
Before describing the involutions used to prove Theorem 3.4, we discuss more the distance statistics on Tamari intervals in order to give a direct interpretation of it on interval-posets. We write TInv(I) the set of Tamari inversions of a set I. As an example, the Tamari inversions of the interval-poset of Figure 3 are exactly (1, 2), (1, 5), (7, 8), (9, 10). As counter examples, you can see that (1, 6) is not a Tamari inversion because we have 1 < 5 < 6 and 6 ⊳ 5. Similarly, (6, 8) is not a Tamari inversion because there is 6 < 7 < 8 and 6 ⊳ 7. Note also that if (a, b) is a Tamari inversion of I, then a ⋪ b and b ⋪ a. Our goal is to prove the following statement. The proof of Proposition 3.9 requires two inner results that we express as Lemmas. Proof. By Proposition 2.15, we know that I ′ is a decreasing-extension of I. This Lemma is then just a refinement of Proposition 2.15 which states that the decreasing relations that have been added come from the Tamari inversions of I. Let (b, a) be a decreasing-cover relation of I ′ such that b ⋪ I a. Because I ′ is an extension of I, we also know that a ⋪ I b. Let k be such that a < k < b. Because we have b ⊳ I ′ a, the Tamari axiom on a, k, b gives us k ⊳ I ′ a. This implies that b ⋪ I ′ k as (b, a) is a decreasing-cover relation of I ′ by hypothesis. In particular, we cannot have b ⊳ I k either as any relation of I is also a relation of I ′ . Similarly, we cannot have a ⊳ I k as this would imply a ⊳ I ′ k, contradicting k ⊳ I ′ a. Proof. Because (a, b) is a Tamari inversion of I, we have b ⋪ I a and a ⋪ I b which means the relation (b, a) can be added to I as a poset. We need to check that the result I ′ is still a interval-poset. Let us first prove that for all k such that a < k < b, we have k ⊳ I a. Let us suppose by contradiction that there exist a < k < b with k ⋪ I a and let us take the minimal k possible. Note that (a, k) is smaller than (a, b) in the lexicographic order which implies that (a, k) is not a Tamari inversion. If there is k ′ such that a < k ′ ≤ k with a ⊳ I k ′ then (a, b) is not a Tamari inversion. So there is k ′ with a ≤ k ′ < k with k ⊳ I k ′ . But because we took k minimal, we get k ′ I a which implies k ⊳ I a and contradicts the fact that (a, b) is a Tamari inversion. Now, we show that the Tamari axiom is satisfied by all triplets a ′ < k < b ′ . By Remark 2.6, we only have to consider decreasing relations. More precisely, the only cases to check are the ones where b ′ ⋪ I a ′ and b ′ ⊳ I ′ a ′ which means a I a ′ and b ′ I b (the relation is either directly added through (b, a) or obtained by transitivity). Let us choose such a couple (a ′ , b ′ ). • transitivity and contradicts our initial hypothesis. • If a < a ′ < b then we have a ⊳ a ′ and (a, b) is not a Tamari inversion. • The only case left is a ′ ≤ a < b ≤ b ′ . Now for k such that a ′ < k < b ′ , if k < a we get k ⊳ I a ′ by the Tamari axiom on (a, k, a ′ ). If a < k < b, we have proved that k ⊳ I a and so k ⊳ I a ′ . If b < k < b ′ , the Tamari axiom on (b, k, b ′ ) gives us k ⊳ I b and by transitivity k ⊳ I ′ a ′ . In all cases, the Tamari axiom is satisfied in I ′ for (a ′ , k, b ′ ). There is left to prove that the number of Tamari inversions of I ′ has been reduced by exactly one. More precisely: all Tamari inversions of I are still Tamari inversions of Suppose that we have b ′ ⊳ I ′ k which means that it has been added by transitivity and so we have b ′ ⊳ I b and a ⊳ I k. No increasing relation has been created in I ′ and so a ′ ⋪ I ′ k. 
we have a ⋪ I k and by the same argument as earlier that no increasing relation has been created in I ′ , a ⋪ I ′ k. Proof of Proposition 3.9. Let I be an interval-poset containing v Tamari inversions and whose bounds are given by two binary trees [T 1 , T 2 ]. Suppose there is a chain of length k between T 1 and T 2 . In other words, we have k + 1 binary trees which connects T 1 and T 2 in the Tamari lattice. Let us look at the intervals [P i , T 2 ]. Lemma 3.10 tells us that each of them can be obtained by adding decreasing relations (b, a) to I where (a, b) ∈ TInv(I). We now apply Proposition 2.15. In our situation, it means that, for 1 ≤ j ≤ k + 1, the interval-poset of [P j , T 2 ] is an extension of every intervalposets [P i , T 2 ] with 1 ≤ i ≤ j: the Tamari inversions that were added as decreasing relations in [P i , T 2 ] are kept in [P j , T 2 ]. In other words, to obtain P i+1 from P i , one or more Tamari inversions of I are added to P i as decreasing relations. At least one Tamari inversion is added at each step which implies that v ≥ k. This is true for all chain and thus v ≥ d(I). Now, let us explicitly construct a chain between T 1 and T 2 of length v. This will give us that v ≤ d(I) and conclude the proof. We proceed inductively. • If v = 0, then d(I) ≤ v is also 0 which means T 1 = T 2 : this is a chain of size 0 between T 1 and T 2 . • We suppose v > 0 and we apply Lemma 3.11. We take the first Tamari inversion of TInv(I) and add it to I as a decreasing relation. We obtain an interval-poset I ′ with v − 1 Tamari inversions which is a decreasing-extension of I. Then by Proposition 2.15, the bounds of I ′ are given by [T ′ 1 , T 2 ] with T ′ 1 > T 1 . By induction, we construct a chain of size v − 1 between T ′ 1 and T 2 which gives us a chain of size v between T 1 and T 2 . The interpretation of the distance of an interval as a direct statistic on interval-posets is very useful for our purpose here as it gives an explicit way to compute it and its behavior through our involutions will be easy to state and prove. It is also interesting in itself. Indeed, this statistic appears in other conjectures on Tamari intervals, for example Conjecture 19 of [PR12] which is related to the well known open q-t-Catalan problems. 4.1. Grafting of interval-posets. In this section, we revisit some major results of [CP15] which we will be used to define some new involutions. Definition 4.1. Let I 1 and I 2 be two interval-posets, we define a left grafting operation and a right grafting operation depending on a parameter r. Let α and ω be respectively the label of minimal value of I 2 (shifted by the size of I 1 ) and the label of maximal value of I 1 . Let c = c 0 (I 2 ) and y 1 , . . . y c be the decreasing roots of I 2 (shifted by the size of I 1 ). The left grafting of I 1 over I 2 with size(I 2 ) > 0 is written I 1 • I 2 . It is defined by the shifted concatenation of I 1 and I 2 along with relations y ⊳ α for all y ∈ I 1 . The right grafting of I 2 over I 1 with size(I 1 ) > 0 is written I 1 ← − δ r I 2 with 0 ≤ r ≤ c. It is defined by the shifted concatenation of I 1 and I 2 along with relations y i ⊳ ω for 1 ≤ i ≤ c. Figure 15 gives an example. Note that the vertices of I 2 are always shifted by the size of I 1 . For simplicity, we do not always recall this shifting: when we mention a vertex x of I 2 in a grafting, we mean the shifted version of x. These two operations were defined in [CP15, Def. 3.5]. 
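Before going further with the graftings, here is a short computational aside on the Tamari inversions of the previous subsection, whose number gives the distance d(I) by Proposition 3.9. The formal definition did not survive extraction above, so the sketch uses the condition inferred from the examples given there: (a, b) with a < b is a Tamari inversion when a is below no k with a < k ≤ b and b is below no k with a ≤ k < b. The test case is the small interval-poset of Figure 1, not the one of Figure 3.

```python
# Sketch of TInv(I) under the inferred definition above.  Relations are pairs
# (x, y) meaning x is below y in the interval-poset.
def tamari_inversions(n, relations):
    rel = set(relations)
    inversions = []
    for a in range(1, n + 1):
        for b in range(a + 1, n + 1):
            if any((a, k) in rel for k in range(a + 1, b + 1)):
                continue            # some a < k <= b with a below k
            if any((b, k) in rel for k in range(a, b)):
                continue            # some a <= k < b with b below k
            inversions.append((a, b))
    return inversions

# Interval-poset of Figure 1: increasing (1,3), (2,3); decreasing (2,1), (4,3), (5,3).
relations = {(1, 3), (2, 3), (2, 1), (4, 3), (5, 3)}
print(tamari_inversions(5, relations))   # [(4, 5)] with this definition, so d(I) = 1
```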
Originally, the right grafting was defined as a single operation ← − δ which result was a formal sum of interval-posets. In this paper, it is more convenient to cut it into different sub-operations depending on a parameter. We can use these operations to uniquely decompose interval-posets: this will be explained in Section 4.2. First, we will study how the different statistics we have defined are affected by the operations. We start with the contact vector C which is equal to the final forest vector DC. Proof. First, remember that, by Proposition 3.6, contacts can be directly computed on the final forest of the interval-posets: the non-final contacts correspond to the number of components and c v for 1 ≤ v ≤ n is the number of decreasing children of the vertex v. Now, in the left grafting I 1 • I 2 , the two final forests are simply concatenated. In particular, c 0 (I 1 • I 2 ) = c 0 (I 1 ) + c 0 (I 2 ). The contact vector C(I 1 • I 2 ) is then formed by this initial value followed by the truncated contact vector of I 1 , then an extra 0 which correspond to c n , then the truncated contact vector of I 2 . The contacts of the right grafting I 1 ← − δ i I 2 depend on the parameter i. Indeed, each added decreasing relation merges one component of the final forest of I 2 with the last component of the final forest of I 1 and thus reduces the number of components by one. As a consequence, we have c 0 (I 1 ← − δ i I 2 ) = c 0 (I 1 ) + c 0 (I 2 ) − i. The contact vector is formed by this initial value followed by the truncated contact vector of I 1 , then the new number of decreasing children of n which is i by definition, then the truncated contact vector of I 2 . Proof. When we compute I 1 • I 2 , we add increasing relations from all vertices of I 1 to the first vertex α of the shifted copy of I 2 . In other words, we attach all increasing roots of I 1 to a new root α. The number of components in the initial forest of I 1 • I 2 is then given by ic ∞ (I 2 ) (the last component contains I 1 ) and the number of increasing children of α is given by ic ∞ (I 1 ). Other number of increasing children are left unchanged and we thus obtain the expected vector. In the computation of I 1 ← − δ i I 2 , the value of i only impacts the decreasing relations and thus does not affects the vector IC. No increasing relation is added which means that the initial forests of I 1 and I 2 are only concatenated and by looking at connected components, we obtain ic ∞ (I 1 ← − δ i I 2 ) = ic ∞ (I 1 ) + ic ∞ (I 2 ). The vector IC is formed by this initial value followed by the truncated initial forest vector of I 2 , then an extra 0 which correspond to ic 1 (I 2 ), then the truncated initial forest vector of I 1 . To understand how the rise vector behaves through the grafting operations, we first need to interpret the grafting on the upper bound Dyck path of the interval. We start with the left grafting. • r 0 (I 1 • I 2 ) = r 0 (I 1 ); • R(I 1 • I 2 ) = (r 0 (I 1 ), r 1 (I 1 ), . . . , r n−1 (I 1 ), r 0 (I 2 ), r 1 (I 2 ), . . . , r m−1 (I 2 )). Proof. The definition of I 1 • I 2 states that we add all relations (i, α) with i ∈ I 1 and α the first vertex of I 2 . This is the same as adding all relations (i, α) where i is an increasing root of I 1 (the other relations are obtained by transitivity). The increasing roots of I 1 correspond to the up-steps of D 1 which corresponding down-steps do not have a following up-step, i.e., the up-steps corresponding to final down-steps of D 1 . 
By concatenating D 1 and D 2 , the first up-step of D 2 is now the first following up-step of the final down-steps of D 1 : this indeed adds the relations from the increasing roots of I 1 to the first vertex of I 2 . The expressions for the initial rise and rise vectors follow immediately by definition. The effect of the right grafting on the rise vector is a bit more technical. For simplicity, we only study the case where I 1 is of size one which is the only one we will use in practice. Proof. The right-grafting only adds decreasing relations. On the initial forests, it is then nothing but a concatenation of the two initial forests. In particular, in the case of u ← − δ i I, no increasing relation is added from the vertex one to any vertex of I. On the upper bound Dyck path, this means that the down-step corresponding to the initial up-step is not followed by any up-steps: the Dyck path of I has to be nested into this initial up-step. The expressions for the rise vector follows immediately. Remark 4.6. When applying a right-grafting on u, the interval-poset of size 1, the rise vector and the initial forest vector have similar expressions: This will be a fundamental property when we define our involutions. Note also that we can have size(I 2 ) = 0 in all these expressions. Now, the only statistic which is left to study through the grafting operations is the distance. Recall that by Proposition 3.9, it is given by the number of Tamari inversions. In the same way as for the R vector, it is more complicated to study on the right grafting in which case, we will restrict ourselves to size(I 1 ) = 1. Proposition 4.7. Let I 1 and I 2 be two interval-posets, and u be the interval-poset of size one. Then Look for example at Figure 16: the Tamari inversion (1, 3) of I 1 and (1, 2) of I 2 are kept through I 1 • I 2 and no other Tamari inversion is added. For the right grafting, you can look at Figure 17: the intervalposet I 2 only has one Tamari inversion (1, 3) and we have c 0 (I 2 ) = 2. You can check that d(u ← − δ 1 I 2 ) = 2 = 1+2−1, the two Tamari inversions being (2, 4) and (1, 4). Proof. We first prove d(I 1 • I 2 ) = d(I 1 ) + d(I 2 ). The condition for a couple (a, b) to be a Tamari inversion is local: it depends only on the values a ≤ k ≤ b. Thus, because the local structure of I 1 and I 2 is left unchanged, any Tamari inversion of I 1 and I 2 is kept in I 1 • I 2 . Now, suppose that a ∈ I 1 and b ∈ I 2 . Let α be the label of minimal value in I 2 (which has been shifted by the size of I 1 ). By definition, we have a < α ≤ b and a ⊳ α in I 1 • I 2 : (a, b) is not a Tamari inversion. Now, let I = u ← − δ i I 2 with 0 ≤ i ≤ c 0 (I 2 ) and let us prove that d(I) = d(I 2 ) + c 0 (I 2 ) − i. Once again, note that the Tamari inversions of I 2 are kept through the right grafting. For the same reason, the only Tamari inversions that could be added are of the form (1, b) with b ∈ I 2 . Now, let b be a vertex of I 2 which is not a decreasing root. This means there is a < b with b ⊳ I 2 a. In I, the interval-poset I 2 has been shifted by one and so we have: 1 < a < b with b ⊳ I a: (1, b) is not a Tamari inversion of I. Let b be a decreasing root of I 2 . If b ⊳ I 1 then (1, b) is not a Tamari inversion. If b ⋪ I 1, we have that: by construction, there is no a ∈ I 2 with 1 ⊳ I a; because b is a decreasing root there is no a ∈ I 2 with a < b and b ⊳ a. • The operation . In practice, we think of it as I L • (u ← − δ r I R ). The parameter r identifies which element is I in the composition sum. 
Definition 4.10. Let T be a binary tree of size n. We write v 1 , . . . , v n the nodes of T taken in in-order (following the binary search tree labeling). Let ℓ : {v 1 , . . . , v n } → N be a labeling function on T . For all subtrees T ′ of T , we write size(T ′ ) the size of the subtree and labels(T ′ ) := v i ∈T ′ ℓ(v i ) the sum of the labels of its nodes. We say that (T, ℓ) is a Tamari interval grafting tree, or simply grafting tree if the labeling ℓ satisfies that for every node v i , we have An example is given in Figure 19: the vertices v 1 , . . . , v 8 are written in red above the nodes, whereas the labeling ℓ is given inside the node. For example, you can check the rule on the root v 4 , we have size(T R (v 4 )) − labels(T R (v 4 )) = 4 − 1 = 3 and indeed ℓ(v 4 ) = 2 ≤ 3. The rule is satisfied on all nodes. Note that if the right subtree of a node is empty (which is the case for v 1 , v 3 , v 6 , and v 8 ) then the label is always 0. Proof. First, let us check that we can obtain a interval-poset from a grafting tree. We read the grafting tree as an expression tree where each empty subtree is replaced by an entry as an empty interval-poset and each node corresponds to the operation I L • u ← − δ r I R where r is the label of the node, I L and I R the respective results of the left and right subtrees, and u the interval poset of size 1. In other words, the interval-poset I = ∆ −1 (T, ℓ) where (T, ℓ) is a grafting tree is computed recursively by • if T is empty then I = ∅; and ℓ L and ℓ R the labeling function ℓ restricted to respectively T L and T R . We need to check that the operation u ← − δ r ∆ −1 (T R , ℓ R ) is well defined i.e, in the case where T is not empty, that we have 0 ≤ r ≤ c 0 (∆ −1 (T R , ℓ R )). We do that by also proving by induction that c 0 (∆ −1 (T, ℓ)) = size(T ) − labels(T ). This is true in the initial case where T is empty: c 0 (∅) = 0. Now, suppose that T = v k (T L , T R ) with ℓ(v k ) = r and that the property is satisfied on (T L , ℓ L ) and (T R , ℓ R ). We write I L = ∆ −1 (T L , ℓ L ) and I R = ∆ −1 (T R , ℓ R ). In this case, I ′ := u ← − δ r I R is well defined because we have by definition that r ≤ size(T R ) − labels(T R ), which by induction is c 0 (I R ). Besides, by Proposition 4.2, we have c 0 (I ′ ) = 1 + c 0 (I R ) − r. We now compute I = I L • I ′ and we get c 0 (I) = c 0 (I L ) + 1 + c 0 (I R ) − r which is by induction size(T L ) − labels(T L ) + 1 + size(T R ) − labels(T R ) − r = size(T ) − labels(T ). Conversely, it is clear from Proposition 4.8 that the grafting decomposition of an interval-poset I gives a unique labeled binary tree. We need to prove that the condition on the labels holds. Once again, this is done inductively. An empty interval-poset gives an empty tree and the condition holds. Now if I decomposes into the triplet (I L , I R , r) we suppose that the condition holds on (T L , ℓ L ) = ∆(I L ) and (T R , ℓ R ) = ∆(I R ). We know that 0 ≤ r ≤ c 0 (I R ) and we have just proved that c 0 (I R ) is indeed size(T R ) − labels(T 2 ). In other words, the grafting tree of an interval-poset can be obtained directly without using the recursive decomposition. Also, the tree T only depends on the initial forest and the labeling ℓ only depends on the final forest. Proof. We prove the result by induction on I. If I is empty, there is nothing to prove. We then suppose that I decomposes into a triplet (I L , I R , r) with k = size(I L ) + 1. 
We suppose by induction that the proposition is true on I L and For example, on Figure 19, we have all d i = 0 except for d 4 = 4 − 1 − 2 = 1 and d 5 = 3 − 1 = 2. This indeed is consistent with d(I) = 3, the 3 Tamari inversions being (4, 7), (5, 6), and (5, 7). More precisely, the number d i is the number of Tamari inversions of the form (i, * ). Proof. Once again, we prove the property inductively. This is true for an empty tree where we have d(I) = 0. Now, let I be a nonempty interval-poset, then I decomposes into a triplet (I L , I R , r) with Proposition 4.7 gives us (4.12) 4.3. Left branch involution on the grafting tree. We now give an interesting involution on the grafting tree which in turns gives an involution on Tamari intervals. We call right hanging binary trees the binary trees whose left subtree is empty. An alternative way to see a binary tree is to understand it as list of right hanging binary trees grafted together on its left-most branch. For example, the tree of Figure 19 can be decomposed into 3 right hanging binary trees : the one with vertex v 1 , the one with vertices v 2 and v 3 and the one with vertices v 4 to v 8 . Figure 20, The only thing to check is that ℓ ′ still satisfies the grafting tree condition. This is immediate. Indeed, for v i ∈ T , and T R (v i ) its right subtree, we have ℓ( its right subtree, even though T ′ R might be different from T R , the statistics are preserved: size(T ′ R (v i ′ )) = size(T R (v i )) and labels(T ′ R (v i ′ )) = labels(T R (v i )), because the involution only acts on left branches. As a consequence, we now have an involution on Tamari intervals. Definition 4.17 (The Left Branch Involution). The left branch involution on Tamari intervals is defined by the left branch involution on their grafting trees. φ(I) := ∆ −1 (φ(∆(I))) (4.13) The grafting tree seems to be the most natural object to describe the involution. Indeed, even though it can be easily computed on intervalposets using decomposition and graftings, we have not seen any simple direct description of it. Furthermore, if we understand the interval as a couple of a lower bound and upper bound, then the action on the upper bound is simple: the shape of the upper bound binary tree is given by the grafting tree and so the involution on the upper bound is only the classical left-branch involution. Nevertheless, the action on the lower bound cannot be described as an involution on binary trees: it depends on the corresponding upper bound. One way to understand this involution is that we apply the left-branch involution on the upper bound binary tree and the lower bounds "follows" in the sense given by the labels of the grafting tree. In other words, the involution exchanges the rise vector and initial forest vector while leaving unchanged the number of contacts, the contact monomial, and the distance. Proof. Points (4.14) and (4.15) are immediate. Indeed, (4.5) tells us that c 0 (I) is given by size(∆(I)) − labels(∆(I)) : this statistic is not changed by the involution. Now remember that, by Proposition 4.2, the values c 1 (I), . . . , c n (I) are given by ℓ(v 1 ), . . . , ℓ(v n ), so . This monomial is commutative and the involution sending ℓ to ℓ ′ only applies a permutation on the indices: the monomial itself is not changed. Also, we always have ℓ(v n ) = ℓ ′ (v n ) = 0 so the division by x 0 is still possible after the permutation and still removes the last value x ℓ ′ (vn) . As an example, on Figure 20, we have C( Point (4.16) is also immediate by Proposition 4.14. 
Indeed, for all We prove point (4.17) by induction. It is trivially true when size(I) = 0 (both vectors are empty). Now suppose that I is an interval-poset of size n > 0. Let (T, ℓ) = ∆(I), then T is a non-empty binary tree which can be seen as a list of k non-empty right hanging binary trees T 1 , . . . , T k where T 1 is the left-most one and T k is at the root. We write I 1 , . . . I k the corresponding interval-posets and I ′ 1 , . . . , I ′ k their respective images through φ. By definition of ∆(I), we have that At this point, either k > 1 which means the sizes of each intervals I 1 , . . . , I k is strictly smaller than n and we can conclude by induction. Otherwise, k = 1 and φ(I) is a right hanging binary tree. We then have where u is the interval-poset of size 1, J is an interval-poset of size n − 1 and r is a parameter 0 ≤ r ≤ c 0 (J). We write J ′ = φ(J). By The complement involution and rise-contact involution. The left-branch involution φ is not yet what we need to prove Theorem 3.4. Indeed, we want an involution which exchanges the contact monomial with the rise monomial and the number of non-final contacts with the number of initial rises. Nevertheless, the left-branch involution will be our crucial element to build it, combined with another involution which we define now. Proof. Every increasing relation a ⊳ I b is sent to a decreasing relation (n+1−a) ⊳ ψ(I) (n+1−b). In particular, each connected component of the initial forest of I is sent to exactly one connected component of the final forest of ψ(I) and so ic ∞ (I) = dc 0 (ψ(I)). Now, if a vertex b has k increasing children in I, its image (n+1−b) has k decreasing children in ψ(I) so ic b (I) = dc n+1−b (ψ(I)). Remember that IC * reads the numbers of increasing children in reverse order from n to 2 whereas DC * reads the number of decreasing children in the natural order from 1 = n+1−n to n − 1 = n + 1 − 2. We conclude that IC(I) = DC(ψ(I)). More precisely, (a, b) is a Tamari inversion of I if and only if (n + 1 − b, n + 1 − a) is a Tamari inversion of ψ(I). Proof. Let a < b be two vertices of I, we set a ′ = n + 1 − b and b ′ = n + 1 − a. • There is a ≤ k < b with b ⊳ I k if and only if there is In other words, (a, b) is a Tamari inversion of I if and only if (a ′ , b ′ ) is a Tamari inversion of ψ(I). By Proposition 3.9, this gives us d(I) = d(ψ(I)). You can check on Figure 21 that I has 3 Tamari inversions (1, 5), (2, 3), and (2, 5) which give respectively the Tamari inversions (4, 8), (6, 7), and (4, 7) in ψ(I). We are now able to state the following Theorem which gives an explicit combinatorial proof of Theorem 3.4. We give an example computation on Figure 22. You can run more examples and compute tables for all intervals using the provided live Sage-Jupyter notebook [Pon]. 5. The m-Tamari case 5.1. Definition and statement of the generalized result. The m-Tamari lattices are a generalization of the Tamari lattice where objects have a (m + 1)-ary structure instead of binary. They were introduced in [BPR12] and can be described in terms of m-ballot paths. A m-ballot path is a lattice path from (0, 0) to (nm, n) made from horizontal steps (1, 0) and vertical steps (0, 1) which always stays above the line y = x m . When m = 1, a m-ballot path is just a Dyck path where up-steps and down-steps have been replaced by respectively vertical steps and horizontal steps. They are well known combinatorial objects counted by the m-Catalan numbers (5.1) 1 mn + 1 (m + 1)n n . 
They can also be interpreted as words on a binary alphabet, and the notion of primitive path still holds. Indeed, a primitive path is an m-ballot path which does not touch the line y = x/m outside its extremal points. From this, the definition of the rotation on Dyck paths given in Section 2.2 can be naturally extended to m-ballot paths, see Figure 23. When interpreted as a cover relation, the rotation on m-ballot paths induces a well defined order, and even a lattice [BPR12]. This is what we call the m-Tamari lattice T_n^(m), see Figure 24 for an example. The intervals of m-Tamari lattices have also been studied. In [BMFPR11], it was proved that they are counted by

(5.2)  I_{n,m} = \frac{m+1}{n(mn+1)} \binom{(m+1)^2 n + m}{n-1}.

They were also studied in [CP15], where it was shown that they are in bijection with some specific families of Tamari interval-posets. Our goal here is to use this characterization to generalize Theorem 3.4 to intervals of m-Tamari lattices, thus proving Conjecture 17 of [PR12]. First, let us introduce the m-statistics which correspond to the classical-case statistics defined in Definition 3.1.

Definition 5.1. Let B be an m-ballot path. We define the following m-statistics. An example is given on Figure 25. When m = 1, this is the same as Definition 3.1. Note also that we will later define a bijection between m-ballot paths and certain families of Dyck paths which also extends to intervals: basically, any element of T_n^(m) can also be seen as an element of T_{n×m}, but the statistics are not exactly preserved, which is why we use slightly different notations for m-statistics to avoid any confusion.

Let a = [a_1, …, a_n] be the area vector of B. We partition the values of a such that a_i and a_j are in the same set if a_i = a_j and, for all i′ with i ≤ i′ ≤ j, we have a_{i′} ≥ a_i. Let λ = (λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_k) be the integer partition obtained by keeping only the set sizes, and let e(B, X) = x_{λ_1} ⋯ x_{λ_k}, a monomial on a commutative alphabet X. Then e(B, X) = C^m(B, X) with x_0 = 1. The definition of e(B, X) comes from [PR12, Conjecture 17]. As an example, the area vector of the path from Figure 25 is (0, 1, 2, 4, 2, 4, 4, 0). The set partition is {{a_1, a_8}, {a_2}, {a_3, a_5}, {a_4}, {a_6, a_7}}. In particular, the area vector always starts with a 0 and each new 0 corresponds to a contact between the path and the line. Here we get λ = (2, 2, 2, 1, 1), which indeed gives e(B, X) = x_1^2 x_2^3 = C(B, X) at x_0 = 1.

Proof. If the step u_i starts at a point (x, y), then we have by definition my = x + a_i. In particular, if a_i = a_j, then u_i and u_j both have a contact with a same affine line s of slope 1/m. Then a_i and a_j belong to the same set in the partition if and only if the path between u_i and u_j stays above the line s. More precisely, the line s cuts a section p of the path, starting at some point (a, b + j/m) where (a, b) is the starting point of a vertical step and 1 ≤ j ≤ m. The non-final contacts of this path p with the line s are exactly the vertical steps u_k with a_k = a_i. The final contact corresponds either to the end of the path B or to a horizontal step: it does not correspond to an area value a_k = a_i. As for the classical case, we now extend those definitions to intervals of the m-Tamari lattice. Finally, the definition of distance naturally extends to m-Tamari. We can now state the generalized version of Theorem 3.4.
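As a small computational aside, the counting formulas (5.1) and (5.2) and the monomial e(B, X) above are easy to evaluate directly; the sketch below (helper names are ours) reproduces the worked example of the area-vector partition.

from math import comb

def m_catalan(m, n):
    """m-Catalan number of formula (5.1)."""
    return comb((m + 1) * n, n) // (m * n + 1)

def m_tamari_intervals(m, n):
    """Number of intervals of the m-Tamari lattice, formula (5.2)."""
    return (m + 1) * comb((m + 1) ** 2 * n + m, n - 1) // (n * (m * n + 1))

def area_partition(a):
    """Set sizes of the partition of the area vector: a_i, a_j are grouped when
    a_i == a_j and every value in between is >= a_i."""
    n, used, sizes = len(a), [False] * len(a), []
    for i in range(n):
        if used[i]:
            continue
        used[i], size, j = True, 1, i + 1
        while j < n and a[j] >= a[i]:
            if a[j] == a[i]:
                used[j] = True
                size += 1
            j += 1
        sizes.append(size)
    return sorted(sizes, reverse=True)

# Example from the text: lambda = (2, 2, 2, 1, 1), i.e. e(B, X) = x_1^2 x_2^3.
assert area_partition([0, 1, 2, 4, 2, 4, 4, 0]) == [2, 2, 2, 1, 1]
assert m_catalan(1, 3) == 5
assert m_tamari_intervals(1, 3) == 13   # classical Tamari intervals for n = 3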
As the m-Tamari lattice can be understood as an upper ideal of the Tamari lattice, it follows that the intervals of T_n^(m) are actually a certain subset of the intervals of T_{n×m}: they are the intervals where both upper and lower bounds are m-Dyck paths (in practice, it is sufficient to check that the lower bound is an m-Dyck path). It is then possible to represent them as interval-posets. This was done in [CP15], where the following characterization was given: … for all 1 ≤ i ≤ n (see Figure 27).

The m-binary trees have an (m+1)-ary recursive structure: this is the key element in the proof of Theorem 5.9 and we will also use it in this paper.

Definition 5.10. The m-binary trees are defined recursively as being either the empty binary tree or a binary tree T of size m×n constructed from m+1 subtrees T_L, T_R1, …, T_Rm such that
• the sum of the sizes of T_L, T_R1, …, T_Rm is mn − m;
• each subtree T_L, T_R1, …, T_Rm is itself an m-binary tree;
• and T follows the structure described below.
The left subtree of T is T_L. The right subtree of T is constructed from T_R1, …, T_Rm by the following process: graft an extra node to the left of the leftmost node of T_R1, then graft T_R2 to the right of this node, then graft an extra node to the left of the leftmost node of T_R2, then graft T_R3 to the right of this node, and so on. Note that, in total, m extra nodes were added: we call them the m roots of T. Figure 28 gives two examples of m-binary trees for m = 2 with their decompositions into 3 subtrees. (Figure 28: examples of m-binary trees for m = 2; T_L is in red, T_R1 in dotted blue and T_R2 in dashed green; in the second example, T_R1 is empty.) More examples and details about the structure can be found in [CP15]. In particular, m-binary trees are the images of m-Dyck paths through the bijection of Definition 2.11.

When working on the classical case, we could safely identify an interval of the Tamari lattice and its representing interval-poset. For m > 1, we need to be a bit more careful and clearly separate the two notions. Indeed, the m-statistics from Definition 5.3 of an interval of T_n^(m) are not equal to the statistics of its corresponding interval-poset from Definition 3.2. They can anyway be retrieved through simple operations.

Proposition 5.11. Let I be an interval of T_n^(m). …

Proof. All identities related to rises and contacts are a direct consequence of Proposition 5.7. Only (5.14) needs to be proved, and it is actually also direct: T_n^(m) is isomorphic to the ideal of m-Dyck paths in T_{n×m}, and so the distance between two paths in the lattice stays the same. In particular, m-interval-posets are rise-m-divisible but not necessarily contact-m-divisible. Besides, we saw that rise-m-divisible Dyck paths were exactly m-Dyck paths, but the set of rise-m-divisible interval-posets is not equal to the set of m-interval-posets. Indeed, an interval whose upper bound is an m-Dyck path is rise-m-divisible, but it can have a lower bound which is not an m-Dyck path, so it is not an m-interval-poset. Furthermore, it is quite clear that the set of m-interval-posets is not stable under the rise-contact involution β. Indeed, the image of an m-interval-poset would be contact-m-divisible but not necessarily rise-m-divisible. In this section, we describe a bijection between the set of m-interval-posets and the set of rise-contact-m-divisible intervals. This bijection will allow us to define an involution on m-interval-posets which proves Theorem 5.5.

Definition 5.13.
Let (T, ℓ) be a grafting tree of size nm and v 1 , . . . , v nm be the nodes of T taken in in-order. We say that (T, ℓ) is a m-graftingtree if ℓ(v i ) ≥ 1 for all i such that i ≡ 0 mod m. Proposition 5.14. An interval-poset I is a m-interval-poset if and only if ∆(I) is a m-grafting-tree. As an example, the top and bottom grafting trees of Figure 30 are m-grafting trees: you can check that every odd node has a non-zero label. The corresponding m-interval-posets are drawn on the same lines. Proposition 5.14 is a direct consequence of Theorem 5.9 and Proposition 4.12. Proof. This is immediate by Proposition 4.12: (T, ℓ) corresponds to a m-interval-poset I. In particular, the upper bound of I is a m-binary tree which is equal to T . Proof. This proposition contains different results which we organize as claims and prove separately. These two properties are intrinsically linked, we will prove both at the same time by induction on the recursive structure of m-binarytrees. Let (T, ℓ) be a m-grafting tree. By Proposition 5.15, T is a m-binary tree. If T is empty, then there is nothing to prove. Let us suppose that T is non empty: it can be decomposed into m + 1 subtrees T L , T R 1 , . . . , T Rm which are all m-grafting trees. By induction, we suppose that they satisfy the claim. Let us first focus on the case where T L is the empty tree. Then v 1 (the first node in in-order) is the root and moreover, the m-roots are v 1 , . . . v m . We call T 1 , T 2 , . . . , T m the subtrees of T whose roots are respectively v 1 , . . . , v m (in particular, T 1 = T ). See Figure 29 for an illustration. Figure 29. Illustration of T 1 , . . . , T m In particular, for 1 ≤ k < m, the tree T k follows a structure that depends on T R k and T k+1 as shown in Figure 29 and T m depends only on T Rm . Note that T 2 , . . . T k are grafting trees but they are not mgrafting trees whereas T R 1 , . . . , T Rm are. Following Definition 4.10, the structure gives us for 1 ≤ k < m and (5.17) ℓ(v m ) ≤ c 0 (T Rm , ℓ). Also, for 1 ≤ k < m, we have ℓ ′ (v k ) = m(ℓ(v k ) − 1) ≥ 0 (indeed remember that ℓ(v k ) ≥ 1 because (T, ℓ) is a m-grafting-tree) and ℓ ′ (v m ) = mℓ(v m ) ≥ 0. To prove that (T, ℓ ′ ) is a grafting tree, we need to show We simultaneously prove The case k = 1 in (5.18) and (5.20) proves the claim. We start with k = m and then do an induction on k decreasing down to 1. By hypothesis, we know that (T Rm , ℓ) satisfies the claim. In particular (T Rm , ℓ ′ ) is a grafting tree and c 0 (T Rm , ℓ ′ ) = m c 0 (T Rm , ℓ). i.e., case k = m of (5.20). Now, we choose 1 ≤ i < m and assume (5.18) and (5.20) to be true for k > i. We have ℓ ′ (v i ) = m (ℓ(v i ) − 1), so (5.16) gives us ℓ ′ (v i ) ≤ m c 0 (T R i , ℓ) + m c 0 (T i+1 , ℓ) − m (5.23) = c 0 (T R i , ℓ ′ ) + c 0 (T i+1 , ℓ ′ ) + i − m using (5.20) with k = i + 1. As i < m, this proves (5.18) for k = i. Now, the structure of T i gives us The case where T L is not the empty tree is left to consider but actually follows directly. The claim is true on T L by induction as its size is strictly smaller than T . LetT be the tree T where you remove the left subtree T L . ThenT is still a m-grafting tree and the above proof applies. The expansion on T consists of applying the expansion independently on T L andT and we get c 0 (T, ℓ ′ ) = c 0 (T L , ℓ ′ )+c 0 (T , ℓ ′ ) = m c 0 (T, ℓ). T is still a m-binary tree, which by Proposition 4.12, means that the upper bound of ∆ −1 (T, ℓ ′ ) is a m-binary tree: it corresponds to a m-Dyck path and is then m-rise-divisible. 
We have just proved that c_0(T, ℓ′) = m·c_0(T, ℓ) is a multiple of m. By Proposition 4.2, the rest of the contact vector is given by reading the labels on T: by definition of ℓ′, all labels are multiples of m. We define (T, ℓ) = contract(T, ℓ′) to make it the inverse of the expand operation: …

As earlier, we simultaneously prove that (T, ℓ) is an m-grafting tree and that c_0(T, ℓ) = c_0(T, ℓ′)/m. Our proof follows the exact same scheme as for Claim 1. We recursively decompose T into T_L, T_R1, …, T_Rm. As earlier, the only case to consider is when T_L is empty. We use the decomposition of T depicted in Figure 29 and prove (5.20) and (5.16) by induction on k decreasing from m to 1. The case k = m is straightforward: we have that (5.19) implies (5.17), and (5.22) is still true. Now we choose 1 ≤ i < m and assume (5.16) and (5.20) to be true for k > i. Using (5.18), we get … We have 0 < i/m < 1 and, because ℓ(v_i) is an integer, (5.16) is true. Besides, by definition of ℓ, we have ℓ(v_i) ≥ 1, which satisfies the m-grafting tree condition. The rest of the induction goes smoothly because (5.25) is still valid.

The expand and contract operations are the final crucial steps that allow us to define the m-contact-rise involution and prove Theorem 5.5. Before that, we need a last property to understand how the distance statistic behaves through the transformation: … for 1 ≤ i ≤ n and 1 ≤ j < m. In other words, every value of the contact vector of (T, ℓ) is m times the corresponding value of the m-contact vector of I. Besides, as the expansion does not affect the initial forest of Ĩ, we also have that every value of the rise vector of (T, ℓ) is m times the corresponding value of the m-rise vector of I. The grafting tree (T, ℓ) is rise-contact-m-divisible. This property is preserved through the β involution, and if (T′, ℓ′) = β(T, ℓ), then the contact vector of (T′, ℓ′) is a permutation of the rise vector of (T, ℓ) and vice versa. Because (T′, ℓ′) is rise-contact-m-divisible, we can apply the contract operation and we get an m-interval-poset which corresponds to some interval J of T_n^(m). The m-rise and m-contact vectors of J are computed respectively from the rise and contact vectors of (T′, ℓ′) by dividing all values by m. This proves (5.34) and (5.35). For (5.36), observe that the distance statistic is only affected by the expand and contract operations. The expand only applies an affine transformation which does not depend on the shape of T and is then reverted by the application of contract later on.
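The following is a hedged sketch of the expand and contract label maps. The text only states the rule explicitly on the roots v_1, …, v_m of the recursive decomposition (ℓ′(v_k) = m(ℓ(v_k) − 1) for k < m and ℓ′(v_m) = m·ℓ(v_m)); the sketch assumes the rule applies positionally along the in-order reading, scaling labels at positions that are multiples of m by m and mapping the other labels ℓ ↦ m(ℓ − 1). This is our reading, not a verbatim definition from the paper.

def expand(ell, m):
    """Labels ell[0:nm] of the nodes v_1, ..., v_nm in in-order, after expansion."""
    return [m * l if (i + 1) % m == 0 else m * (l - 1)
            for i, l in enumerate(ell)]

def contract(ell_prime, m):
    """Inverse of expand on labels that are all multiples of m."""
    return [l // m if (i + 1) % m == 0 else l // m + 1
            for i, l in enumerate(ell_prime)]

# Round trip on an assumed m-grafting-tree labelling for m = 2: positions that are
# not multiples of m carry labels >= 1, matching the worked example after the
# definition of m-grafting trees.
ell = [1, 0, 2, 3]
assert contract(expand(ell, 2), 2) == ell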
19,166
2018-02-22T00:00:00.000
[ "Mathematics" ]
Development of a CFD-LES model for the dynamic analysis of the DYNASTY natural circulation loop

• A LES model for the analysis of a natural circulation loop in the presence of distributed heating is developed.
• The model takes into account the fluid and the solid regions, with heat generation and conduction in the pipes.
• The approach is able to reproduce stable and unstable transients of DYNASTY.
• New information is gathered, such as stratification and counter-current flows occurring during flow reversal.
• LES is suitable to overcome RANS limitations in the stability and dynamic analysis of natural circulation systems.

Natural circulation is exploited in nuclear systems to passively remove power in case of accident scenarios. In this regard, the DYNASTY experimental facility at Politecnico di Milano has been set up to increase the knowledge on single-phase, buoyancy-driven systems in the presence of distributed heating. In this paper, the development of a computational fluid dynamics (CFD) model of DYNASTY is presented, focusing on the capability of CFD to assess the dynamic behavior of the facility. The large eddy simulation (LES) model takes into account both the fluid and the solid regions, with heat generation and 3D heat conduction resolved in the pipe walls. The study, conducted using OpenFOAM, shows (i) the capability of reproducing stable and unstable transients of DYNASTY, (ii) new observations on the features of flow reversals during unstable transients, and (iii) the suitability of LES to overcome RANS limitations in the stability and dynamic analysis of natural circulation systems.

Introduction

Natural circulation is the result of the presence of density gradients in a fluid system, induced by temperature differences, which generate convective motion through the action of buoyancy forces. In nuclear engineering, the use of natural circulation to passively extract heat from, and increase the safety of, nuclear reactors has been documented and employed in several reactor designs of Gen III+, such as the AP1000 (Sutharshan et al., 2011) and the ESBWR (Rassame et al., 2017). Due to the ever-increasing concerns for improving nuclear reactor safety, passive heat extraction systems are also major players in the development of Generation-IV reactors. This is particularly true in the case of the molten salt fast reactor (MSFR), due to the peculiar characteristic of its fuel, which is fluid and also acts as the reactor coolant. This feature introduces the opportunity, during accident situations, to take advantage of the fluid passively flowing by natural circulation inside the system and releasing its decay heat to external heat sinks, even in the absence of any external action. However, such a design inherently relies on natural circulation being able to sustain a stable, high enough mass flow-rate inside the core to extract the required thermal power without overheating the system's components or operating in an unstable oscillating regime, which could also become dangerous for the integrity of the reactor. This latter aspect is of fundamental importance, given the well-known tendency of natural circulation loops to be subjected to flow instability and mass flow rate oscillations (IAEA, 2005). In particular, for these kinds of systems, the most frequent dynamic instabilities are density wave oscillations (Belblidia and Bratianu, 1979) that arise from the dependence of the buoyancy driving forces on the pressure losses and the density and temperature distribution in the fluid (Fig. 1).
Because of this feedback mechanism, buoyancy-driven flows typically have an initial transient characterized by oscillations of the thermal hydraulic parameters of the system, before converging to a stable circulation state. However, for certain conditions occurring in the system, these oscillations can diverge with time (Misale, 2014), leading to limit situations such as flow reversal that can have drastic consequences on the system's reliability. Differently from the majority of the work available in the literature (Vijayan et al., 2007; Misale, 2010), an additional challenge when dealing with liquid-fuelled reactors like the molten salt reactor is caused by the heat source being distributed, and by the impact of this on the natural circulation stability (Krepel et al., 2014; Krepel et al., 2008).

Natural circulation was first studied, both from an experimental and a modelling standpoint, using natural circulation loops (NCL) (Zvirin, 1982; Greif, 1988). These facilities are composed of pipelines looped in simple geometries, through which a fluid can flow because of the buoyancy force generated by the temperature-driven differences in the fluid's density. Temperature distributions are achieved via local or distributed external heating/cooling applied to the pipes to introduce/extract heat from the fluid. These facilities are important as they provide the possibility to observe the behaviour of natural circulation flows in a controlled environment, as well as the dependence between the system's parameters and its stability, and they provide validation data for the numerous mathematical models that have been developed to predict the behaviour of NCLs. Once insight from simplified NCLs has been obtained, this can then be applied to more complex systems, obtaining a priori information on the desired system behaviour before it is physically built. In light of this, in order to support the development of the molten salt reactor, and in particular the adoption of passive decay heat removal systems in its design, as well as assisting the development of modelling approaches able to assess the dynamical behaviour of natural circulation systems, and the effectiveness of passive cooling strategies in the presence of distributed heat sources, the DYNASTY facility (Cauzzi, 2019) was built in the DYNAMO laboratories at Politecnico di Milano (Fig. 2). DYNASTY is a natural circulation loop that can operate under distributed heating conditions and with molten salt as the working fluid.

Different modelling approaches are available to describe flow instability in natural circulation systems, according to the degree of fidelity required by the task at hand. Welander (1967) was the first to develop an approach based on a linear stability analysis in order to draw a stability map which is a function of two parameters. Starting from this pioneering work, linear stability maps (Parks, 1992) have been used to predict the asymptotic stable or unstable behaviour of natural circulation systems. However, stability maps cannot provide any information on the time-dependent behaviour of the system. To overcome this issue, 1D system codes that describe the system in terms of mass, momentum and energy balance equations have been employed.
With these, it is possible to study the time-dependent evolution of any unstable transient, assess the oscillation amplitude of the mass flow rate and the temperature and also predict flow reversals. On the other hand, a mono-dimensional approach still prevents the model from accounting for local three-dimensional phenomena that can be present in buoyancy-driven transients. Consequently, the presence of different flow regimes, flow reversal and laminar-to-turbulent transition mechanisms, radial temperature distribution and other 3D phenomena can be analyzed only with more advanced approaches such as computational fluid dynamics (CFD). CFD models can be a very useful tool in the analysis of the timedependent behaviour of natural circulation systems. However, proper modelling of turbulent phenomena in these systems is crucial due to the wide range of flow conditions characterizing them and the physical complexities often involved. Most of the CFD campaigns performed on natural circulation systems rely on the use of turbulence models used in conjunction with Reynolds-averaged formulation of the Navier-Stokes equations Luzzi et al., 2017). The choice of Reynolds-averaged Navier-Stokes (RANS) models for these studies -as for most common industrial flows -is mandated by the interest in averaged quantities in terms both of the velocity field and temperature distribution and justified by the limited computational burden of RANS relative to other approaches (Pini, 2017;Cauzzi, 2019). On the other hand, this choice has direct consequences for the possibility of the model correctly reproducing turbulent phenomena during unstable transients, where both turbulent and laminar conditions, and transition between the two, are found and flow reversal may occur. Previous studies indicate that RANS models, due to the ensemble average assumption they are built on, can damp the oscillations in the system, introducing a strong bias in the evaluation of both the asymptotic stability and the dynamics of the system (Cauzzi, 2019). In addition, other assumptions often made in RANS modelling, such as the isotropy of the turbulent viscosity and the proportionality between turbulent momentum and energy transfer, as well as the near-wall scaling laws commonly employed, may fail in natural circulation and in the presence of local mixing and stratification phenomena, limiting the accuracy of the CFD model (Krepper et al., 2002;Hanjalic, 2002;Choi and Kim, 2012). In view of this, more resolved approaches to modelling turbulent flows, such as large-eddy simulation (LES), offer the potential for more detailed predictions of the three-dimensional behaviour of the flow, even if at the cost of more computational resources (Rodi et al., 2013;Blocken, 2018). In LES, only the large scales of the turbulent motion are resolved, while the smallest scales are left to be modelled with an appropriate sub-grid scale model (SGS) (Rodi et al., 2013). In natural circulation flows, this approach should be able to account for the different regimes present in the flow, contributing fully in turbulent regions and becoming negligible in the case of re-laminarizations that are typically predicted during unstable transients (Luzzi et al., 2017;Cauzzi, 2019). In this work, a 3D CFD model of the DYNASTY facility is developed and LES is applied to study the dynamics of natural circulation in the system. 
The model includes conjugate heat transfer with the solid walls of the flow loop and the metal temperature dynamics and, to the best of the authors' knowledge, this is the first example of an LES model with conjugate heat transfer applied to the analysis of a buoyancydriven system having the length scale of the DYNASTY facility. The main aim of the work performed is to increase current knowledge of natural circulation in DYNASTY and improve understanding of the interactions between local three-dimensional phenomena and the global dynamic stability of natural circulation loops with distributed heating. Prior to this work, DYNASTY's dynamic stability was studied through various models of increasing fidelity, from stability maps, to 1D system models (DYMOLA (Dassault Systèmes, 2019)) and CFD simulations based on RANS modelling (Cauzzi, 2019). These simulations demonstrated the possibility for the system to produce both stable and unstable transients, depending on the system's setup parameters (amount of injected heat, heating distribution, cooling temperatures). Here, the work uses two previously analysed configurations which led to a stable and an unstable transient. These configurations are simulated with the LES model in order to highlight the limitations in previous RANS modelling results (i.e., the loss of local and three-dimensional resolution due to spatial averaging) and increase confidence in the LES model's ability to predict the stability of the selected DYNASTY configurations in advance of its operation with molten salt. Because of the innovative nature of the work, the pre-processing phase of the simulations focused on defining high quality computational grids (block-structured mesh), following the criteria present in A. Battistini, A. Cammi, S. Lorenzi et al. Chemical Engineering Science 237 (2021) 116520 Battistini et al. (2020) and recapped in Section 3.3, and historically required by LES (Rodi et al., 2013). Other improvements with respect to older simulation campaigns are related to the treatment of the heat source (analysed in Section 3.2) and the boundary conditions for the outlet of the system (reviewed in Section 3.2). The LES model adopted for this analysis employs the wall adapting linear eddy-viscosity (WALE) SGS model developed by Ducros et al. (1999) for modelling the filtered part of the turbulent motion. WALE is an algebraic (0 equation) eddy-viscosity model which has shown better performance in the literature than the standard Smagorinsky model (Ma et al., 2009). This was confirmed in a validation campaign using experimental data from a conventional NCL (L2, based at Università di Genova) by Battistini (2020), where the WALE model showed good properties and the ability to perform in alternating laminar and turbulent flow regimes without the excessive damping of the turbulent structures typical of the standard Smagorinksy model (Layton, 2016), a fundamental requirement for accurately predicting NCLs' behaviour. The paper is organized as follows. In Section 2, the DYNASTY facility is described with the main geometrical and thermophysical data. In Section 3, the mathematical model is presented, in addition to the simulation setup in terms of boundary conditions, modelling of the filling tank, initial conditions, and spatial and temporal discretizations. Section 4 presents the numerical results of a stable and unstable configuration, with a focus on the analysis on the flow reversal and on the temperature distributions. 
Some conclusions and future perspectives are outlined in Section 5. The experimental facility DYNASTY is a rectangular natural circulation loop composed of AISI-316 steel pipes (with thermo-physical properties in Table 1, nominal internal diameter of 38 mm and thickness of 2 mm) and dimensions reported in the schematic of Fig. 3. The loop can operate in either natural or forced circulation conditions by switching the lower horizontal leg (GO1) to an optional section (GO2) in which the flowmeter is substituted by a pump. For the remainder of this discussion, only the natural circulation configuration will be analysed. By design, every section except the top horizontal pipe is coiled with independently operated electric resistors (GV1, GV2, GO1, GO2 in Fig. 3), which allow heating of the loop in different configurations including the distributed one that is the main focus of this paper. The independently heated sections are chosen in a way so that vertical or horizontal localized heating configurations can be employed and allow the operation of DYNASTY as a conventional NCL. However, it is possible to regulate the power output of each section to achieve uniform distributed heating conditions to emulate the conditions arising in molten salt reactors 2 . The maximum power output of the heaters is 5.3 kW and the facility is designed to operate with molten salts. For the experimental campaigns, the mixture is mainly composed of sodium and potassium nitrites and nitrates (KNO 3 ; NaNO 2 ; NaNO 3 ), the thermophysical properties of which are reported in Table 2, with these values employed in the simulations. Nonetheless, DYNASTY can also be operated with water and water-glycol mixtures, the latter of which are effective in reproducing molten salt thermophysical properties whilst reducing solidification and high temperature problems encountered when using molten salts. Insulation is applied on the external surface of the loop's pipes, reducing thermal dispersions to a minimum, and an upper tank is installed for the loop filling and to serve as an expansion tank, whereas a lower tank is present to collect the salt at the end of the experiments. Experimental data collection is performed via thermocouples at 4 locations (shown in Fig. 3), close to the elbows of the loop, and with Coriolis mass flow meter Proline Promass F80 DN 25 in the middle of the lower horizontal leg. The cooling section, a 2.1 m portion of the upper horizontal pipe, is the only section without heaters or insulation. Instead, the outer shell is covered with copper fins to increase heat exchange and external forced convection cooling is provided by a fan placed underneath the pipe. DYNASTY numerical model In this work, both the fluid and the solid regions of DYNASTY are modelled (Fig. 4). The 3D computational model is taken to be as similar to the experimental facility as possible. This meant including in the model the same nominal dimensions (wall thickness, internal diameter, outer dimensions etc.) and thermo-mechanical properties reported in Section 2. The only deviations from the actual system are related to features that were believed to have limited impact on the outcome of the simulations. 
The discrepancies are limited to the bends' shape, which for simplicity purposes have all been modelled as circular elbows with an 0.1 m radius (instead of T junctions as in the experimental facility), the omission of the lower forced circulation branch of the facility (which is closed Table 1 AISI-316 Steel thermophysical properties (Incropera et al., 2002). . DYNASTY schematic (lengths in mm), the thermocouples can be seen at the inlet (T1) and outlet (T2) of the cooler section and before (T3) and after (T4) the lower horizontal pipe in the simulations and therefore does not have any effect on the results), and the positioning of the expansion tank, which is offset from its real location by 14 cm. The latter feature is a solution adopted to simplify the geometric construction of the CAD model by minimising the number of T-junctions in the structure. Indeed, as a mesh block structure was used, automatic meshing algorithms could not easily discretise penetrating geometries, therefore 4 circular bends were modelled, and only a T-junction has been modelled manually to include the tank (see Fig. 5). The modelling of the solid walls is particularly important because it allows account to be taken of the effect of the thermal inertia of the walls on the dynamic stability of the system, as well as producing information on the operational temperatures reached in the metal. In previous simulations focused on conventional NCLs (i.e., with localized heating and cooling), the inclusion of solid regions, rather than using simplified fixed temperature or wall heat flux boundary conditions on the inside wall, was found to have a dampening effect on instability phenomena (Pini, 2017). In the fluid domain, the filtered, weakly-compressible Navier-Stokes equations (Eqs. (1)), with gravity forcing 3 , coupled to the total energy transport equation (Eq. (3)) (Vreman et al., 1994), are solved at each time-step. The density-dependent formulation is used to correctly model the relationship between the fluid motion and the buoyancy effects caused by the density differences induced by the temperature gradients in the molten salt. A second order implicit Euler method has been used for temporal discretisation, and a second order linear-upwind scheme for both momentum and energy convection. The remaining contributions to the momentum and energy equations have been discretised with second order linear schemes. The filtering operation is required by the LES model and filters out the fluctuations at the smallest scale, which would require a very fine discretisation to be correctly resolved (Rodi et al., 2013). Instead, the impact of these on the resolved flow is modelled with a SGS model and the contribution is added to the shear stress tensor: Here, the overbar denotes a filter operation and tilde the explicit Favre-filtering operation (Erlebacher et al., 1992), defined as Þ; u, and p, which are the temperature-dependent density, velocity and pressure, respectively. Additionally, e tot is the total energy and s ij the viscous stress tensor, defined as: Finally, s SGS ij is the SGS contribution, discussed in more details in the next section. For the solid regions only a heat diffusion equation is solved. The equations are solved with a segregated approach with the OpenFOAM solver chtMultiRegionFoam (The OpenFOAM Foundation Ltd., 2018) for each region of the system, and adjacent solid-liquid regions interact via coupled boundary conditions. 
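A note on the Favre filtering invoked above: the explicit definition cited from Erlebacher et al. (1992) was lost in extraction. For reference, the standard density-weighted (Favre) filter used in compressible LES, which we assume is the one intended here, is

\tilde{\phi} = \frac{\overline{\rho \phi}}{\overline{\rho}},

applied to velocity and total energy, so that the filtered equations are written in terms of the filtered density, the Favre-filtered velocity and the filtered pressure.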
The solution is based on a combination of the PISO (Pressure Implicit with Splitting of Operator) algorithm (Issa, 1986), a predictor-corrector approach able to simulate transient behaviours with large time-steps, and the SIMPLE (Semi-Implicit Method for Pressure- Linked Equations) algorithm (Spalding, 1972), devised to retrieve steady state solutions of the Navier-Stokes equations. The result is the PIMPLE algorithm (The OpenFOAM Foundation Ltd., 2018), a transient solver with improved characteristics in terms of the numerical stability of the solution, due to the iteration of the PISO algorithm -non-iterative in origin -until convergence of the solution is achieved at each time-step. Sub-grid scale turbulence model Because unstable transients in natural circulation loops often involve alternating laminar and turbulent regimes and complex local thermal hydraulic conditions (Luzzi et al., 2017), RANS models have been found to struggle in predicting the global behaviour of the system and the velocity and temperature distributions. Whilst RANS models allow relatively fast calculations of even complex systems by solving the Navier-Stokes equations for the mean flow only, they entirely rely on how accurately turbulent phenomena are modelled by the selected turbulence model Þ . The LES approach, instead, by resolving the largest scales of turbulent fluid motion, which are the most difficult to model due to their anisotropy, and the majority of the turbulent structures (Rodi et al., 2013), has the potential to better capture the relationship between the thermal-hydraulics of the system and its stability. The effect of the smallest scale fluctuations, i.e. the unresolved part of the turbulence spectrum, are incorporated in the resolved flow computations using an appropriate SGS model (Xu, 2003). Due to the relative simplicity of its formulation, the LES eddy-viscosity based WALE SGS model (Ducros et al., 1999) has been selected. Similarly to RANS eddy-viscosity models, the sub-grid contribution is added to the shear stress by adding a sub-grid term: where m t is the SGS turbulent viscosity. The WALE model (Ducros et al., 1999) has shown promising results for several applications (Weickert et al., 2010;Kamali-Moghadam et al., 2016), and similarly to the more well-known Smagorinsky model (Smagorinsky, 1963), calculates the turbulent kinematic viscosity from the filter width D (defined as the cube root of the volume of the cells) and the resolved strain rate tensor b S ij (Eq. (6)) and its deviatoric part b S d ij (Eq. (7) The final formulation for the turbulent viscosity is: This formulation results in an improved ability to represent the alternating turbulence regimes typical of natural circulation flows, and the contribution given by the SGS model when the flow is laminar is correctly dampened and tends to zero, differently from the standard Smagorinsky model, which has been found to overdampen the turbulent oscillations (Layton, 2016). On account of the properties of the WALE model, wall-damping functions have not been adopted and the flow is wall-resolved keeping the y þ in the first cell from the walls below unity. Boundary and initial conditions Due to the presence of a solid pipe wall, the boundary conditions for the fluid are limited to the interface boundary conditions between the molten salt and the solid wall and the upper outlet section. 
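Returning to the WALE sub-grid model described in the previous subsection, the following is a minimal sketch, written by us rather than taken from the paper's implementation, of the eddy-viscosity formula of Ducros et al. (1999) for a single cell. C_w is a model constant that we assume here; commonly quoted values lie roughly between 0.3 and 0.6. A pure-shear velocity gradient returns zero SGS viscosity, which illustrates the wall-adapting behaviour discussed above.

import numpy as np

def wale_nut(g, delta, c_w=0.325):
    """WALE eddy viscosity from the velocity-gradient tensor g[i][j] = du_i/dx_j and
    the filter width delta (cube root of the cell volume)."""
    g = np.asarray(g, dtype=float)
    s = 0.5 * (g + g.T)                                        # resolved strain rate S_ij
    g2 = g @ g
    sd = 0.5 * (g2 + g2.T) - np.eye(3) * np.trace(g2) / 3.0    # traceless symmetric part S^d_ij
    ss = float(np.tensordot(s, s))                             # S_ij S_ij
    sdsd = float(np.tensordot(sd, sd))                         # S^d_ij S^d_ij
    denom = ss ** 2.5 + sdsd ** 1.25
    if denom == 0.0:                                           # quiescent cell: no SGS viscosity
        return 0.0
    return (c_w * delta) ** 2 * sdsd ** 1.5 / denom

# Pure shear (e.g. a laminar near-wall layer) gives nu_t = 0:
print(wale_nut([[0.0, 100.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]], delta=1e-3))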
The fluid/solid interface has been modelled following the conjugated heat transfer framework, by applying continuity conditions for the temperature (Eq. (9)) and the heat flux (Eq. (10)) (proportional to the temperature gradients) between adjacent domains: For the fluid velocity, the no slip condition is applied at the solid wall, while a zero gradient condition is enforced on the pressure. The modelling of the upper tank (Fig. 4) -which serves as fluid outlet and pressure boundary -has been found to be one of the most critical aspects for modelling the DYNASTY system behaviour, as will be shown in the results section. Due to the density dependence of the compressible NS equations, the fluid expands in the facility when it is heated. Therefore, especially in the first part of a transient, the model has to allow for the exit of some of the fluid, but at the same time back-flows may be also possible should reductions in the temperature occur during the unstable transients. A set of different simulation setups (reported in Table 3) has been tested, to reveal the effect the modelling choice for this outlet has on the stability of the system. The first of the adopted conditions ("Open" case) allows the spillage of fluid, while the back-flow velocity is calculated from the flux at the outlet surface in the previous time-step. As most CFD literature suggests, pressure is arbitrarily fixed at the boundary surface, to have a reference value from which the Navier-Stokes equations can be solved. With regards to the temperature, its value at the boundary has been treated as adiabatic for out-flows and the following Robin boundary condition has been applied for back-flows: where the value of h ext is taken as 5.23 W m À2 K À1 , the result of natural convection calculations based on an external air temperature T ext ¼ 20 C. This condition allows some heat dissipation to the surrounding domain, a more moderate condition with respect to the commonly adopted inletOutlet which fixes the temperature for back-flows. A second, less-physical and more ideal condition ("Closed" configuration) has also been used. In this, no out-flows or back-flows are allowed across the outlet, and the solver compensates for any change in density by correcting the amount of mass in the system. The idea of using a closed configuration is taken in accordance with previous analyses and other examples available in the literature Cauzzi, 2019;Pini, 2017). In particular, a closed configuration is usually imposed in analytical and numerical 1D models, often employed in the analysis of the stability of natural circulation loops, and numerous works on flow instability have employed the Boussinesq approximation (Krishnani and Basu, 2016), where density changes with temperature are only accounted for in the gravitational term of the momentum equation. To maintain the possibility for adding heat losses around the heated sections as external boundary conditions to the solid domain, the power source (given by resistance wires coiled around the pipes in the experimental facility) is applied using volumetric sources in the heat diffusion equation of the solid domain. The value of such sources (Eqs. (12) and (13)) is calculated in order to uniformly provide 5.3 kW (or 1 kW for the stable transient) of thermal power in GO1, GV1 and GV2, specified in Fig. 
3: In the present work, however, it is assumed that due to the insulation applied to the solid wall the losses are negligible, and an adiabatic conditions is imposed in the heated sections between the heated solid walls and the surroundings. On the other hand, the boundary conditions for the cooled section of the wall are given as a fixed temperature, which is T cool ¼ 180 C for the stable configuration and T cool ¼ 240 C for the unstable configuration. This choice has been extracted from previous RANS simulations of the DYNASTY system for the SAMOFAR project (Safety assessment of the Molten Salt Fast Reactor), one of the Research and Innovation projects in the Horizon 2020 Euratom research programme, focused on the demonstration of the key safety features of the MSFR Cauzzi, 2019). In future works, this condition could be improved to a more realistic one, which should take into account the forced convection of air from the fan positioned below the cooler pipe. The initial conditions for the entire domain match those imposed in previous RANS simulations of DYNASTY (Pini, 2017;Cauzzi, 2019), and the start-up conditions for previous experimental analyses of similar natural circulation loops (i.e. L2, Università di Genova) (Misale, 2014;Luzzi et al., 2017). The fluid is considered initially at rest with a uniform temperature equal to the temperature of the external wall of the cooler (180 C or 240 C). Uniform pressure is also imposed, with this initial condition not showing any effect on the progression of the transients, and the pressure distribution across the facility quickly adapts to the outlet boundary value and the gravitational force (lower on top, higher at the bottom). Computational grid and time discretisation One of the critical aspects of LES when compared with RANS is the stricter requirement on spatial and temporal resolution. Since the majority of dynamical and three-dimensional turbulent structures are resolved in LES, the computational grid needs to be sufficiently refined for these structures to be properly captured. However, given DYNASTY's physical dimensions (12 m length, 38 mm diameter) and the great number of finite volume cells % 3 Á 10 6 cells needed to discretise the whole geometry, much care should be taken in building a computational grid with the optimum amount of elements. When using RANS, there is the necessity of reaching so called "mesh independence", i.e., the level of spatial refinement above which the results of the simulation for the same setup do not change on further refinement. In LES instead, any further mesh refinement leads to the resolution of additional turbulent structures, until direct numerical simulation (DNS) resolution is reached. Therefore, the resolution necessary to resolve the desired turbulent structures needs to be determined. In Battistini et al. (2020), a preliminary analysis of the dependence of LES accuracy on geometrical meshing in cylindrical geometries in low-Reynolds number flows was carried out. Different mesh construction approaches were tested, varying the linear step in the three cylindrical coordinates (q; h; z). The mentioned work determined guidelines on the choice of refinement required whilst constraining errors in the prediction of the pressure losses, turbulence kinetic energy and resolved shear stresses to reasonable values. These criteria have been used in the present work in the construction of the mesh. 
Adapting the refinement criteria found in the pipe flow analysis and applying them as non-dimensional linear steps to the present geometry, average flow parameters and the reference shear velocity typical of DYNASTY's transients have to be determined. To do so, a preliminary RANS simulation (the results of which are omitted here for brevity but can be found in Battistini (2020)) has been carried out on a preliminary mesh of DYNASTY, adopting a block-structured version of the calculation grid and a refinement level following the choices found in Cauzzi (2019). To retrieve a reference value of the friction velocity, the results in El Khoury et al. (2013) can be used to relate the u_bulk,avg in the system to the friction velocity u_s,avg:

u_s = 0.016359·u_bulk³ − 0.03529·u_bulk² + 0.071989·u_bulk + 4.9×10⁻⁵   (15)

The u_bulk was estimated from the mass flow rate retrieved from the preliminary RANS simulation, equal to ṁ_avg ≈ 0.2 kg s⁻¹. Average values of the kinematic viscosity and the density in the system were determined from the mean value of the temperature from the RANS simulation, and a summary of all the average quantities used for the conversion of the geometrical mesh parameters is reported in Table 4. Using the shear velocity estimated from RANS (Battistini, 2020), and the dimensionless refinement criteria established in Battistini et al. (2020), such as Δz⁺ lower than 15 and Δr⁺ lower than 1 at the wall required for wall-resolved LES, the criteria used to build the DYNASTY mesh for the unstable configuration in cylindrical coordinates have been retrieved; they are reported in Table 5. Visual representations of the computational grid are shown in Figs. 6 and 7. The computational grid has ≈ 3.5 × 10⁶ hexahedral finite volumes and is a good compromise between accuracy and computational requirements. Despite this, around 3 months of simulation time on an average number of 70 cores (≈ 8 × 10⁴ CPU-hours) were required to simulate a typical transient in DYNASTY. Indeed, preliminary experimental tests have shown that DYNASTY's transient time-spans are in the order of 10³-10⁴ s (Pini, 2017; Cauzzi, 2019), and because of the geometric mesh requirements of the LES approach, care has also been taken with the choice of the time-step, which impacts computational resources as well as the numerical stability of the solution (Versteeg et al., 1995). To eliminate the dependence of numerical stability on the time-step, an implicit scheme has been used to discretize the Navier-Stokes equations. However, in LES models the Courant-Friedrichs-Lewy (CFL) condition (Eq. (17)), which expresses the speed at which information can propagate across the calculation domain (Courant et al., 1928), needs to be properly enforced in order to ensure the correct propagation of the turbulent structures and avoid any loss of information (Lau et al., 2012). Therefore, the time step has been controlled to achieve a CFL number lower than 1 in the entire loop for the duration of the transient. This meant keeping the time-step in a range between 5 × 10⁻³ s and 10⁻² s, depending on the fluid's average velocity across the entire facility, as per Eq. (17).

Results and Discussion

In this section, the outcomes of two LES simulations of the DYNASTY facility are reported. The power and cooler temperature configurations for the two transients (1 kW, 180 °C and 5.3 kW, 240 °C) were extracted from Cammi et al. (2019).
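As a rough illustration of the sizing logic just described, the sketch below evaluates the friction-velocity fit of Eq. (15), converts a wall-unit target into a physical cell size, and bounds the time step with the CFL constraint. The kinematic viscosity and bulk velocity used in the example are assumed, order-of-magnitude values, not the Table 4 data.

def u_shear(u_bulk):
    """Shear (friction) velocity u_s from the bulk velocity, polynomial fit of Eq. (15)."""
    return (0.016359 * u_bulk**3 - 0.03529 * u_bulk**2
            + 0.071989 * u_bulk + 4.9e-5)

def wall_unit_size(target_plus, u_bulk, nu):
    """Physical cell size giving a target size in wall units (dx+ = dx * u_s / nu)."""
    return target_plus * nu / u_shear(u_bulk)

def max_time_step(cell_size, velocity, cfl_max=1.0):
    """Largest time step allowed by the CFL constraint u * dt / dx < cfl_max."""
    return cfl_max * cell_size / velocity

# Assumed, order-of-magnitude inputs (not the Table 4 values):
nu = 1.5e-6      # m^2/s
u_bulk = 0.2     # m/s
print(wall_unit_size(1.0, u_bulk, nu))   # first radial cell size for dr+ of order 1
print(max_time_step(1e-3, u_bulk))       # dt for a 1 mm cell at CFL = 1, about 5e-3 s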
The two configurations have consistently been found stable and unstable, respectively, using different modelling approaches (stability maps, DYMOLA and CFD). The stable transient 1 kW; 180 C ð Þ shows the convergence of global parameters (such as mass flow-rates, pressure drops, temperature distributions) to steady-state values, whereas the unstable transient 5:3 kW; 240 C ð Þ is characterised by an oscillating mass flow rate that diverges with time until a periodic flow reversal with characteristic frequencies of oscillation is established. Using predictions from previous analyses, this first configuration was tested. As reported in Fig. 8, the LES results show an initial oscillating transient that eventually converges to a steady state mass flow-rate similar to previous RANS predictions . The main differences from previous results are first the flow direction predicted in the system, which may be linked to the different propagation of perturbations that eventually lead the essentially symmetrical system (except for the outlet tank) to unidirectional flow. Moreover, the temporal behaviour of the flow rate is different, as the initial oscillations start earlier with LES for both treatments of the outlet boundary, with oscillations starting first in the Open case. Secondly, when the major part of the oscillatory transient has dampened, the fluctuations of the parameters do not reduce as much as with the RANS-modelled transient, which is to be expected as the solution of the Navier-Stokes equations is calculated from filtered instead of averaged flow variables. Nonetheless, predictions for the stability of the present configuration are in a general agreement with previous results, with the statistical steady state value of the mass flow-rate slightly increased with respect to its RANS counterpart % þ10% ð Þ . This could be Table 4 Average parameters from preliminary RANS simulation (Battistini, 2020 A. Battistini, A. Cammi, S. Lorenzi et al. Chemical Engineering Science 237 (2021) 116520 related to the evidence found in Battistini (2020), where LES predicted higher pressure losses with respect to RANS simulations. Increased pressure losses lead to increased feedback of counterforces in the loop and therefore lower flow rates and a system less prone to instability, given the same buoyancy force. Following Cammi et al. (2019), an increase in the power injected in the system and a higher cooler external temperature should favour the unstable behaviour of the facility. However, 1D model predictions showed a completely different transient (Cauzzi, 2019) and stable operating conditions. Therefore, a further analysis with LES at these operating conditions has been deemed necessary to shed some light on the behaviour of the facility in such conditions. The same set of configurations used for the stable transient (reported in Table 3) were employed and a significant influence of the modelling of the outlet section, which is discussed in detail in the next section below, was found. Effects of the outlet tank boundary conditions Differently from the previous stable transient, the treatment of the outlet boundary condition was found to have a major impact on the results and the stability of the system in the P ¼ 5:3 kW; T cool ¼ 240 C configuration. Fig. 9 shows how the behaviour of the system is different depending on the boundary condition at the outlet. 
Specifically, the free spillage and backflow condition, coupled to the Robin condition on the fluid temperature (Open LES configuration in Table 3 and Fig. 9), which more closely simulates the entire physics of DYNASTY, including the expansion tank and its mutual interaction with the loop, tends to stabilise the mass flow rate to the value observed in the preliminary RANS simulations (ṁ_ss ≈ 0.2 kg s⁻¹), made using the same settings employed for the computational grid analysis (Section 3.3) and also included in Fig. 9. In contrast, the closed outlet, coupled to an adiabatic treatment of the temperature (Closed LES configuration in Table 3 and Fig. 9), which models a more ideal behaviour of the facility without the real dynamics of the fluid expansion and the related inflow/outflow, shows an oscillatory behaviour that increases with time, until periodic flow reversals are reached and maintained throughout the transient. The different behaviour observed with the two boundary treatments can be related to the role of the expansion tank acting as an accumulator, which partially absorbs the alternating hot and cold plugs that are present in unstable conditions (described in more detail in the following sections), dampening flow oscillations in space and time. In Fig. 9, the Open LES transient shows some oscillations at the beginning of the transient, but the presence of the tank is sufficient to damp these oscillations and drive the flow rate to an almost stable steady-state value. Instead, without the tank (Closed LES), these initial oscillations are sufficient to trigger the instability and eventually reach flow reversal. Clearly, if the tank is included, the correct behaviour of the facility can only be predicted if its physical behaviour, and its effect on the system stability, are properly modelled. Overall, therefore, the LES model is able to detect and handle unstable system operation, but the accurate modelling of the outlet section and the upper tank, as well as its impact on the correct prediction of the stability boundary, will need to be carefully addressed in future studies at the end of the experimental campaign, when experimental data will be available. It is also important to note how the RANS simulation predicts a stable system even without the presence of the expansion tank (Closed RANS in Fig. 9). Therefore, the additional turbulent viscosity introduced by the RANS model seems to have a stabilizing effect on the system behaviour. This is shown also near the end of the transient, where the flow rate reaches a stable value and does not show even the small oscillations found in the Open LES system.

Flow reversal analysis

It is interesting to look in more detail at an integrated-value quantity such as the mass flow rate ṁ during the unstable transient. This shows continuous oscillations with alternating positive and negative peaks and zero-flow conditions between the peaks. This behaviour has already been observed in previous works (Luzzi et al., 2017), and it was characterised by the alternating presence of different flow regimes, from fully turbulent to laminar. However, examining the LES predictions during these reversals, it is observed that a purely laminar flow is never established during the unstable transient.
Instead, countercurrent, superimposed flows in the horizontal sections of the facility and the coexistence of oppositely directed flows in the vertical sections are observed with LES, as shown in Fig. 10. This effect cannot be captured in 1D models, with the resolution of the local flow conditions being beyond the resolution capabilities of such approaches. At the same time, in RANS CFD details will be lost due to the averaging procedure, and the accurate modelling of complex phenomena such as counter-current buoyancy-driven effects in the transition and turbulent regions is particularly challenging. Instead, these same phenomena are mostly resolved in LES, which therefore proves to be extremely valuable for their detailed analysis and understanding. Furthermore, differently from conventional natural circulation systems, heating is distributed throughout the entire facility in DYNASTY (except for the cooling section). Therefore, in the vertical section shown in Fig. 10, the fluid particles closer to the vertical wall during flow reversal tend to be hotter, and they are pushed upwards by buoyancy forces, in opposition to the downward flow in the colder central region of the pipe. Consequently, during the inversion phase, the motion of the fluid is mainly driven by the local temperature distribution, because of the lower inertia of the bulk motion. The countercurrent circulation of a central descending region of colder fluid and a cylindrical boundary of ascending hotter fluid generates turbulence, proven by the chaotic directions of the fluid particles at the boundary between ascending and descending flow in Fig. 10. This analysis shows how, during flow reversal, the stratification and the coexistence of counter-current flows occurs, more notably in the horizontal sections of the facility but with some repercussions in the vertical legs. These stratified flows, if integrated, result in low mass flow rates, although the local flow is far from zero. An analysis based on stability maps, built on average dimensionless parameters such as the Reynolds (Re) number or the Grashof (Gr) number, or on 1D system codes such as RELAP5 3D (Idaho National Laboratory, 2015) or DYMOLA (Dassault Systèmes, 2019), both using as input the low integral mass flow rate, would not detect this phenomenon. Therefore, global stability may be empirically predicted, but an important physical aspect of the facil-ity's behaviour, as well as its impact on the stability map, is entirely neglected. Effect of instability on the temperature distribution Dynamic instabilities are dangerous because, due to the mass flow oscillations, hot and cold plugs of fluid circulate around the loop, as already mentioned and described in Welander (1967). These may result in large oscillations of the operational quantities, which may exceed their design limits, with consequent over-stress and eventually damage to the facility. This effect is clearly noticeable in the temperature distribution within the facility during the transient. The adiabatic mixing temperature within the loop in the stable transient, plotted in Fig. 11, shows a stationary distribution once a steady-state condition is reached (hence the limited discrepancy between the two times). On the other hand, in the case of an unstable transient (Fig. 12), the adiabatic mixing temperature is fluctuating both spatially and temporally, proving the presence of locally hot and cold plugs of fluid travelling along the facility. 
Effect of instability on the temperature distribution Dynamic instabilities are dangerous because, due to the mass flow oscillations, hot and cold plugs of fluid circulate around the loop, as already mentioned and described in Welander (1967). These may result in large oscillations of the operational quantities, which may exceed their design limits, with consequent over-stress and eventually damage to the facility. This effect is clearly noticeable in the temperature distribution within the facility during the transient. The adiabatic mixing temperature within the loop in the stable transient, plotted in Fig. 11, shows a stationary distribution once a steady-state condition is reached (hence the limited discrepancy between the two times). On the other hand, in the case of an unstable transient (Fig. 12), the adiabatic mixing temperature fluctuates both spatially and temporally, demonstrating the presence of locally hot and cold plugs of fluid travelling along the facility. These plugs are local regions of the fluid in which the density is consistently different from that of the remaining flow. Therefore, the plugs cause buoyancy counter-forces against the bulk circulation of the fluid. Eventually, the joint effect of pressure losses and these counter-forces may cause flow reversal. In addition, the hot plugs are not sufficiently cooled down by exchanging heat in the cooling section, because of their limited residence time, and, especially in uniformly heated loops, they tend to increase their temperature while travelling through the remaining sections of the loop, more than in the corresponding stable case. Most importantly, the effect of the dynamic instability can also be observed when sampling the temperature of the solid wall. As shown in Fig. 13, the oscillating behaviour of the mass flow rate compromises the heat extraction, inducing higher temperatures and therefore increasing the probability of damage to the structural materials. Buoyancy effect on the temperature distribution It is also interesting to focus on the behaviour of the fluid's temperature distribution on the pipe cross-section in the cooler portion of the loop, which shows an increasing asymmetry along the cooler's length (see Figs. 14 and 15). (Fig. 10: Flow reversal behaviour. Fig. 11: Adiabatic mixing temperature along the full length of the facility in the Open LES, P = 5.3 kW, T_cool = 240 °C configuration; start point at the right of the cooler, then proceeding clockwise through the heated sections first.) The asymmetry presents itself as a stratification of layers of fluid with different temperatures, and this was also noticed in previous works on DYNASTY for stable transients at low power (1-2 kW). Here, the stable results obtained in the Open configuration also allow examination of the same effect at the maximum power input (5.3 kW). As shown in Fig. 15, a layer of cold fluid builds up in the lower part of the pipe. The cold particles have lower inertia and, at the same time, are heavier than the adjacent hot particles. The concurrence of these two characteristics generates the observed distribution, which increases in asymmetry along the cooling section. This can also be observed by looking at the contours of velocity entering and exiting the cooler section (Fig. 16), and at the streamlines of the fluid particles along it (Fig. 17), where those in the layer near the wall are cooled, recirculate down along the walls and accumulate at the bottom of the pipe. Due to the longer trajectory travelled by the colder particles, their residence time in contact with the cold wall is prolonged, leading to lower velocities. This phenomenon needs to be addressed in future works, as it may add uncertainty to the evaluation of the cooling capabilities of the system. Conclusions The present paper focused on the development of a single-phase CFD model, including the modelling of the solid walls, for the study of natural circulation dynamics with a distributed heat source. The analysis focused on the DYNASTY facility, a natural circulation loop built at the Energy Laboratories of Politecnico di Milano. In this model, with the aim of increasing the ability to predict the physical behaviour of the system and overcoming some of the limitations of RANS models applied in previous works, LES is employed.
Due to the high computational cost of LES, this study focused on a small number of configurations previously associated with stable and unstable conditions. This was undertaken to confirm previous results and, at the same time, to highlight the additional physical insight into the facility's behaviour that LES is able to provide. The results have confirmed previous predictions for a stable transient in a low power configuration (1 kW, cooler temperature = 180 °C). For a purportedly unstable configuration (5.3 kW, cooler temperature = 240 °C), an unstable transient was obtained using a closed boundary condition that does not allow any fluid to leave the facility, an ideal condition usually employed in 1D calculations. However, a sensitivity analysis has underlined a strong dependence on this outlet boundary condition. When a more physical open outlet boundary condition was introduced (i.e., the flow is allowed to exit and re-enter the loop following changes in the density), together with the modelling of the salt filling tank, a stable system was observed. Additional studies are therefore necessary on how to properly model this aspect of the facility, on its impact on the correct prediction of the stability boundary, and on the limitations introduced by the closed approximation; these will be carried out once the experimental studies have been completed. Important local phenomena were observed. During unstable transients, flow reversals were shown to occur with stratified counter-current flows in the horizontal cooled and heated pipes, which, when integrated, result in very low mass flow rates. In the vertical sections, instead, a radial stratification occurs with a distinct annular upflow region around the heated wall, in opposition to the core downflow region in the centre of the pipe. This observation improves previous understanding, mostly related to re-laminarisation occurring during the flow reversal, driven by the very low measured flow rates obtained when counter-current flows are integrated. Analysis of the temperature distributions for both stable and unstable transients confirmed the presence of hot and cold fluid plugs travelling along the facility during unstable transients, contributing to destabilisation of the flow. The alternation of clockwise and counterclockwise circulation typical of unstable transients causes the deterioration of the heat removal capabilities of the system and a distinct rise (10-15 °C) in the temperature of the solid walls, detected by including conjugate heat transfer in the CFD model. Results also showed an increasingly asymmetrical temperature distribution in the horizontal cooling section, leading to thermal stratification in the pipe cross-section. This aspect of the flow and its impact on the system's heat removal capabilities will need to be further investigated, and the availability of a reliable numerical tool which includes the modelling of the solid walls will definitely help in supporting and corroborating experimental evidence. In addition to the modelling of the filling tank and the related boundary condition, which should be analysed in depth including more appropriate designs such as a free surface on top of the system with a movable mesh interface (ALE), further development of the model should also focus on increasing the fidelity of the boundary condition used to model the cooling fan.
Given the computational costs of the simulations, mesh requirements were derived in a simplified geometry, in flow conditions as close as possible to those of the DYNASTY facility. Therefore, additional sensitivity analyses on the mesh resolution will also be necessary, including comparisons with more refined simulations and a specific focus on the thermal field resolution, considering the high Prandtl number of the molten salt employed. In this regard, the need for an ad hoc model for the sub-grid heat fluxes, in conditions where some commonly adopted assumptions such as the gradient diffusion hypothesis or the Reynolds analogy are expected to fail, will be worth investigating. Finally, the validation of the model, and of its quantitative predictions of the stability boundary, will be a priority once experimental data from DYNASTY become available. Once validated, the model, by providing detailed insight on the natural circulation behaviour inside the loop, will support the development of passive heat removal systems for the molten salt reactor. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
11,547.2
2021-02-18T00:00:00.000
[ "Engineering", "Environmental Science", "Physics" ]
Impact of Oral Administration of Lactiplantibacillus plantarum Strain CNCM I−4459 on Obesity Induced by High-Fat Diet in Mice Recent evidence suggests that some lactobacilli strains, particularly Lactiplantibacillus plantarum, have a beneficial effect on obesity-associated syndromes. Several studies have investigated probiotic challenges in models of high-fat diet (HFD)-induced obesity, specifically with respect to their impact on hepatic and/or adipocyte metabolism, gut inflammation and epithelial barrier integrity, and microbiota composition. However, only a few studies have combined these aspects to generate a global understanding of how probiotics exert their protective effects. Here, we used the probiotic strain L. plantarum CNCM I−4459 and explored its impact on a mouse model of HFD-induced obesity. Briefly, mice were administered 1 × 10⁹ CFUs/day and fed HFD for 12 weeks. Treatment with this strain improved insulin sensitivity by lowering serum levels of fasting glucose and fructosamine. Administration of the probiotic also affected the transport and metabolism of glucose, resulting in the downregulation of the hepatic Glut-4 and G6pase genes. Additionally, L. plantarum CNCM I−4459 promoted a decreased concentration of LDL-c and modulated hepatic lipid metabolism (downregulation of Fasn, Plin, and Cpt1α genes). Probiotic treatment also restored HFD-disrupted intestinal microbial composition by increasing microbial diversity and lowering the ratio of Firmicutes to Bacteroidetes. In conclusion, this probiotic strain represents a potential approach for at least partial restoration of the glucose sensitivity and lipid homeostasis that are disrupted in obesity. Introduction Non-communicable diseases (NCDs) are the leading cause of mortality worldwide and are favored by a combination of genetic and lifestyle factors. One such NCD is obesity, which correlates with several metabolic syndromes such as insulin resistance, type-2 diabetes, and certain cancers (such as colorectal cancer). Obesity and overweight correspond to an excess of fat accumulation, mainly caused by a prolonged imbalance between energy intake and energy expenditure (World Health Organization). Several factors, such as genetic and environmental ones, influence obesity, as well as a Western diet (rich in simple sugars and fat and poor in fiber), which has been recognized as an "obesogenic" diet. In addition, obesity and metabolic syndrome are characterized by altered gut microbiota, inflammation, and barrier dysfunction [1][2][3][4]. Obesity affects around 13% of the adult population worldwide and has a high cost of treatment, making it an important public health issue (World Health Organization). At the level of individuals, nutritional interventions are typically the preferred strategy for preventing obesity. Several studies have reported the impact of diet on gut microbiota dysbiosis [2,5] and the means by which this can affect obesity. Our increased understanding of the numerous ways in which the Western diet, human biology, and gut microbiota interact has opened new possibilities for the use of probiotics as a therapeutic strategy. Probiotics are live microorganisms that, when administered in adequate amounts, confer a health benefit on the host (Food and Agriculture Organization of the United Nations [6]). The most studied probiotic strains belong to the group of lactic acid bacteria (LAB). Several health benefits have been attributed to LAB (mainly lactobacilli strains), including the regulation of the host immune response and
the epithelial barrier homeostasis, as well as the modulation of the gut microbiota and metabolic functions [7][8][9][10][11]. In the last two decades, different strains of Lactobacillus spp. have been extensively studied as an alternative treatment for intestinal inflammation (e.g., chemically induced colitis and colorectal cancer) [12][13][14], intestinal hyper-permeability [15], or even metabolic disorders induced by HFD [16,17]. In addition, several studies have reported specific beneficial health effects of different strains of Lactobacillus spp. on the control of body weight, glucose tolerance, and hyperlipidemia [18,19]. In particular, strains of Lactiplantibacillus plantarum have been shown to help reduce obesity and ameliorate metabolic syndromes in mouse models of HFD-induced obesity, thus representing good candidates for obesity prevention strategies [16,20,21]. In addition, a controlled, randomized, double-blind trial demonstrated that L. plantarum strains had a beneficial effect in lowering cholesterol levels [22]. These studies have highlighted the pivotal role that the liver and adipocytes play in lipid metabolism to prevent HFD-induced obesity. However, although the gut microbiota is generally altered in metabolic diseases (10.1136/gutjnl-2020-323071, 10.1186/s13073-016-0303-2), only a few studies have addressed how L. plantarum strains impact the microbiota in HFD mouse models [23,24], and the mechanisms underlying the beneficial effects mediated by L. plantarum are therefore still poorly understood. L. plantarum strain CNCM I−4459 is known to maintain intestinal epithelial integrity in a mouse model of colitis (IL10-deficient) [25]. In addition, based on a preliminary study, the strain has been shown to activate in vitro (i.e., in a cellular model using Hutu 80 cells) the expression of Pyy, a gene with an effect on satiety and metabolism (Supplementary Figure S1). We thus decided to explore the impact of strain CNCM I−4459 in a model of HFD-induced obesity in mice. Bacterial Strain and Growth Conditions Lactiplantibacillus plantarum CGMCC No. 1258 (L. plantarum CNCM I−4459) (CGMCC No. 1258, Novanat, Shanghai, China) was isolated from the feces of a healthy child and kindly provided by Indigo Therapeutics. It was grown in MRS (de Man, Rogosa and Sharpe; Difco, Le-Pont-de-Claix, France) medium at 37 °C overnight in aerobic conditions. To prepare the live bacterial inoculum, bacteria were washed two times with PBS and spun down at 3000 × g, and the pellet was suspended in PBS to a final concentration of 5 × 10⁹ colony-forming units (CFUs)/mL in PBS with 15% glycerol. Animal and Experimental Design Male C57BL/6J mice (6-8 weeks old; Janvier SAS, St Berthevin, France) were maintained at INRAE animal facilities (4 mice per cage). Mice were assigned to three groups: two groups of mice (n = 16) were fed a high-fat diet (HFD; 60 kJ% fat, Ssniff, Soest, Germany), and one group (n = 8) received a control diet (CD; 13 kJ% fat, Ssniff) for twelve weeks (diet composition is described in Supplementary Figure S2). The assay was performed in two independent experiments under the same conditions. Weight and food intake (g/cage) were measured once a week throughout the experiment. The food efficiency ratio (FER) was calculated as follows: body weight gain/calorie intake. Once a day, mice were orally administered either 1 × 10⁹ CFUs of L.
plantarum or PBS. Mice were euthanized by cervical dislocation on day 84. Colon, ileum, liver, adipose tissues, and sera were collected and stored under conditions appropriate for further analyses. Feces were collected in the morning, frozen in nitrogen immediately after collection, and stored at −80 °C before processing. All animal experiments were approved by the local INRAE ethics committee and the French Ministry of Research (approval 2015070115416973). Oral Glucose Tolerance Test (OGTT) Mice were fasted for 6 h (by removal of food and bedding) before OGTT analysis. Glucose solution (2 g/kg) was orally administered. Blood glucose levels were measured at time 0 (before glucose gavage) and 15, 30, 60, and 120 min after glucose gavage using a One Touch glucometer (Roche, Meylan, France). The area under the curve (AUC) was calculated following the trapezoidal rule. Insulin levels were detected using a Mouse Ultrasensitive Insulin ELISA (Alpco, Salem, NH, USA) at T0 and T30 min. The homeostatic model assessment of insulin resistance (HOMA-IR) was calculated according to the formula: (fasting glucose (T0) [mg/dL] × fasting insulin (T0) [ng/mL])/405. Microbial DNA Extraction and Amplification DNA was extracted from stool as previously described in Lamas et al. [26]. The resulting DNA pellet was washed with 70% ethanol, dried, and resuspended in 50 µL of Tris-EDTA (TE) buffer. DNA suspensions were stored at −20 °C until amplification. A 16S rDNA amplicon library was sequenced in the Surette lab and the Farncombe Metagenomics Facility on a MiSeq machine using the 2 × 250 bp V3 kit. Any remaining adapter/primer sequences were trimmed, and reads were checked for quality (≥30) and length (≥200 bp) using cutadapt [28]. Reads were further corrected for known sequencing errors using SPAdes [29] and then merged using PEAR [30]. A total of 3,259,918 reads was produced, with an average of 83,588 ± 15,247 reads per sample. Sequencing data are deposited in NCBI under the accession number PRJNA663256. OTUs were identified using a Vsearch pipeline [31] designed to dereplicate (-derep_prefix -minuniquesize 2) and cluster (-unoise3) the merged reads, as well as check for chimeras (uchime3_denovo). Taxonomic classification of OTUs was performed using the classifier from the RDPTools suite [32]. Representative OTU sequences were taxonomically assigned using the RDP classifier with a SAB score ≥ 0.5. Microbiota Composition Analysis Statistical analyses were conducted using the R programming language and software (R Development Core Team 2012), specifically using the packages gplots, gdata, vegan [33], ade4 [34], phyloseq [35], and phangorn [36]. OTU counts were normalized by dividing by their sample size and then multiplying by the size of the smallest sample. α-diversity and richness were estimated using the OTU table data and the functions "diversity" and "estimateR". A distance matrix for β-diversity analysis was computed using the "vegdist" function and the Bray-Curtis method. Principal coordinate analysis was conducted on the distance matrix data using "dudi.pco". Differential enrichment in bacterial taxa among groups was assessed using the linear discriminant analysis (LDA) effect size (LEfSe) algorithm [37]. Kruskal-Wallis rank sum tests and post hoc pairwise Wilcoxon rank sum tests were used to detect differences between groups of variables. p values were corrected as necessary using the false discovery rate correction.
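Referring back to the OGTT methods described above, the following is a minimal sketch (not the authors' analysis code, which is not provided) of how the trapezoidal AUC and the HOMA-IR index could be computed; the glucose and insulin values below are hypothetical.

```python
import numpy as np

def ogtt_auc(time_min, glucose_mg_dl):
    """Area under the glucose curve by the trapezoidal rule."""
    return np.trapz(glucose_mg_dl, time_min)

def homa_ir(fasting_glucose_mg_dl, fasting_insulin_ng_ml):
    """HOMA-IR as defined in the Methods: (glucose [mg/dL] x insulin [ng/mL]) / 405."""
    return fasting_glucose_mg_dl * fasting_insulin_ng_ml / 405.0

# Hypothetical measurements for one mouse (time points as in the Methods):
t = np.array([0, 15, 30, 60, 120])        # minutes after glucose gavage
g = np.array([110, 260, 310, 240, 150])   # blood glucose, mg/dL

print("AUC     =", ogtt_auc(t, g))        # mg/dL * min
print("HOMA-IR =", homa_ir(g[0], 0.9))    # 0.9 ng/mL fasting insulin (hypothetical)
```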
Statistical Analysis Statistics were calculated with Prism software (version 9.4). A normality test (Shapiro-Wilk test) was systematically performed on the data. In the case of a normal distribution, one-way ANOVA, followed by Tukey's multiple comparison test, was used. In the case of a non-normal distribution, data were analyzed using the Kruskal-Wallis test, followed by Dunn's multiple comparison test. The level chosen for statistical significance was 5%. L. plantarum CNCM I−4459 Enhanced Oral Glucose Tolerance by Inhibiting Glucose Metabolism Body weight and food intake were monitored weekly. This experiment was performed twice with a total number of 12-16 animals per group. As expected, PBS-HFD mice gained significantly more weight than PBS-CD-treated mice, with no effect of L. plantarum either on body weight gain, on the food efficiency ratio (FER) (measured as the ratio of body weight gain/calorie intake), on cumulative food intake, or on genes involved in satiety (Pyy and Gcg-1) (Figures 1 and S3). We then performed OGTT on mice that had been fasting for 6 h. HFD-fed mice exhibited a higher fasting glucose level (Figure 2A, p ≤ 0.001), as well as higher levels at every time tested during the OGTT (Figure 2B). Treatment with L. plantarum for 12 weeks significantly reduced fasting glucose levels and glucose levels from T15 min to T60 min after glucose challenge, to a range that was similar to that of CD-treated mice (Figure 2B, p ≤ 0.05 and p ≤ 0.001). Consequently, the AUC decreased in mice treated with this strain (Figure 2C, p ≤ 0.05) compared to PBS-HFD mice. Additionally, serum insulin levels were measured in fasting mice and 30 min after glucose administration. Only PBS-CD mice exhibited lower insulin levels at T0 and T + 30 min compared to PBS-HFD mice (Figure S3B). The HOMA-IR index revealed no significant change in insulin sensitivity in L. plantarum-treated mice (Figure 2D) compared to PBS-HFD mice. Finally, HFD-fed mice treated with L. plantarum CNCM I−4459 exhibited lower plasma fructosamine levels (HbA1c) than control mice (Figure 2E, p = 0.05).
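As a minimal illustration of the statistical workflow described above (Shapiro-Wilk normality test, then either one-way ANOVA with Tukey's post hoc test or Kruskal-Wallis with Dunn's post hoc test), the sketch below uses SciPy; it is not the Prism analysis used in the study, the post hoc steps are only indicated in comments because they are not part of this snippet, and the group values are hypothetical.

```python
from scipy import stats

def compare_three_groups(cd, hfd, hfd_lp, alpha=0.05):
    """Choose a parametric or non-parametric omnibus test depending on normality."""
    normal = all(stats.shapiro(g).pvalue > alpha for g in (cd, hfd, hfd_lp))
    if normal:
        test = stats.f_oneway(cd, hfd, hfd_lp)   # one-way ANOVA (Tukey post hoc would follow)
    else:
        test = stats.kruskal(cd, hfd, hfd_lp)    # Kruskal-Wallis (Dunn post hoc would follow)
    return ("ANOVA" if normal else "Kruskal-Wallis"), test.pvalue

# Hypothetical fasting glucose values (mg/dL) per group:
cd     = [105, 110, 98, 112, 103, 108]
hfd    = [160, 175, 181, 168, 190, 172]
hfd_lp = [125, 118, 132, 140, 121, 129]
print(compare_three_groups(cd, hfd, hfd_lp))
```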
To decipher the molecular mechanisms underlying the improvement in glucose sensitivity, we measured the hepatic expression of the gluconeogenic G6pase gene and of the main glucose transporters, Glut-2 and Glut-4. Although no significant modification was observed in PBS-HFD mice, treatment with L. plantarum significantly reduced the expression of G6pase (p ≤ 0.05) and of the insulin-dependent Glut-4 transporter (p ≤ 0.01) compared to CD mice (Figure 3A,B). No change was observed in the expression of the bidirectional transporter Glut-2 (Supplementary Figure S4A). Compared to CD, HFD resulted in significant upregulation of ileal G6pase, but this effect was not observed in the ileum of mice treated with this strain (Figure 3C). Altogether, these results suggested that supplementation with L. plantarum CNCM I−4459 partially restored glucose sensitivity in HFD-fed mice, in part by downregulating hepatic and ileal metabolism and insulin-dependent transport. Treatment with L. plantarum CNCM I−4459 Decreased Concentrations of Circulating Lipids and Modulated Hepatic Lipid Metabolism Serum levels of TG, FFA, HDL-c, and LDL-c were analyzed in the different groups of mice (Table 1). As expected, HFD-fed mice harbored significantly increased levels of serum HDL-c, LDL-c, and TC compared to the control group, suggesting the onset of dyslipidemia. Interestingly, the HFD group treated with L. plantarum CNCM I−4459 had LDL-c levels similar to those of PBS-CD mice and significantly lower than those of PBS-HFD mice (p ≤ 0.0001). Several studies have reported an increase in lipolysis and/or decreased lipogenic activity in obese subjects. For this reason, we also analyzed the hepatic and adipocyte expression of genes involved in lipogenesis and β-oxidation. As described in Figure 3D, the lipogenic gene Fasn (p ≤ 0.0001) was downregulated in the livers of mice treated with L.
plantarum compared with PBS-HFD mice. Interestingly, though, expression of the lipolytic gene Cpt1-α (p ≤ 0.0001) and of the lipogenic gene Plin (p ≤ 0.001) also decreased compared to PBS-CD mice (Figure 3G). Furthermore, Insig-2 gene expression (an insulin-dependent inhibitor of the lipogenic Srebp genes) was downregulated in L. plantarum-HFD mice compared to PBS-CD mice (Figure 3D, p ≤ 0.0001). However, no significant upregulation of the other hepatic lipolytic and lipogenic genes tested was observed in HFD-fed mice (Supplementary Figure S4B-G). Also, no particular change in lipid metabolism in adipose tissues was observed, except for downregulation of the adipogenic gene Ppar-α (in eAT), the lipogenic gene Plin (in eAT), and leptin (in vAT) in HFD-fed mice (Figure 3H-J). Interestingly, treatment with L. plantarum CNCM I−4459 significantly increased the expression of Ppar-α (Figure 3H, p ≤ 0.05). In addition, we assessed thermogenesis by measuring Ucp1-α expression and found significant regulation in all HFD-fed mice (Supplementary Figure S4H). Altogether, these data suggested that oral administration of L. plantarum resulted in a global downregulation of hepatic lipid metabolism (lipolysis and lipogenesis). Treatment with L. plantarum CNCM I−4459 Reduced Ileal Inflammation and Moderately Affected Epithelial Junctions HFD-induced obesity is often associated with inflammation. Here, systemic inflammation and gut permeability were assessed by measuring serum levels of TNF-α and LBP proteins. PBS-HFD mice had higher levels of TNF-α compared to PBS-CD mice (Figure 4A, p = 0.058) and a slight (but not significant) increase in LBP (Figure 4B, ns); no change was observed with L. plantarum treatment. In addition, ileal inflammation was assessed by measuring mRNA levels of Tnf-α (Figure 4C) as well as protein levels of TNF-α and IL-17. As shown in Figure 4E, PBS-HFD mice exhibited a significant increase in IL-17 protein expression compared to PBS-CD mice (p ≤ 0.01). However, no modification was observed either with L. plantarum treatment or in other tissues, including adipose tissue (Supplementary Figure S5). Since inflammation is often linked with disturbances in intestinal permeability, we measured the expression of tight junction (Claudin-2, Claudin-5, Occludin, and Zo-1) genes in the ileum and colon. In colon samples, only ZO-1 protein and Claudin-5 mRNA expression were dysregulated in PBS-HFD mice compared to their littermates (Supplementary Figure S6A-E). L. plantarum-fed mice only showed increased expression of ZO-1 protein compared to PBS-HFD mice. In the ileum samples, HFD administration appeared to increase mRNA expression of Zo-1 (p ≤ 0.05) and Claudin-2 (p ≤ 0.01) but reduced ileal gene expression of Occludin (p ≤ 0.05), with no modification with L. plantarum administration (Supplementary Figure S6F-I). In parallel, we assessed the levels of the cytokine IL-22, a key cytokine in epithelium homeostasis. In the ileum samples, mice fed with HFD exhibited higher Il-22 expression, with no effect of the strain (Supplementary Figure S6J). Regarding Zo-1, because of the well-known link between inflammation, epithelium integrity, and oxidative stress, Nrf-2 gene expression was also measured in ileum samples, but no difference was observed in L.
plantarum-fed mice (Supplementary Figure S6K). L. plantarum CNCM I−4459 Treatment Partially Reversed the Effect of Diet on Gut Microbiota Composition The composition of the gut microbiota was assessed at week 12 of the dietary intervention. Permutational multivariate analysis of variance (PERMANOVA) showed a minor, non-significant effect of cage repartition (explained variation 9.9%, p = 0.112), whereas most of the variation was explained by the diet intervention (56%, p = 0.001) (Figure 5B). We observed decreased bacterial diversity and richness in the HFD group compared to both the CD-fed (p ≤ 0.05) and L. plantarum-treated groups (Figure 5A).
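As a minimal illustration of two of the community-level summaries used in this section (the Firmicutes/Bacteroidetes ratio and the Bray-Curtis dissimilarity underlying the β-diversity analysis), the sketch below works on a toy, size-normalized OTU count table; it is not the R/vegan pipeline used in the study, and all numbers are invented.

```python
import numpy as np

def firmicutes_bacteroidetes_ratio(counts, phylum_of_otu):
    """Ratio of summed Firmicutes counts to summed Bacteroidetes counts for one sample."""
    f = sum(c for c, p in zip(counts, phylum_of_otu) if p == "Firmicutes")
    b = sum(c for c, p in zip(counts, phylum_of_otu) if p == "Bacteroidetes")
    return f / b

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two samples (vectors of OTU counts)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.abs(x - y).sum() / (x + y).sum()

# Toy OTU table: 4 OTUs, 2 samples (counts already normalized to equal depth)
phyla     = ["Firmicutes", "Firmicutes", "Bacteroidetes", "Actinobacteria"]
hfd_mouse = [120, 80, 40, 10]
cd_mouse  = [60, 40, 130, 20]

print("F/B HFD    :", firmicutes_bacteroidetes_ratio(hfd_mouse, phyla))
print("F/B CD     :", firmicutes_bacteroidetes_ratio(cd_mouse, phyla))
print("Bray-Curtis:", round(bray_curtis(hfd_mouse, cd_mouse), 3))
```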
L. plantarum CNCM I−4459 Did Not Affect Fecal End-Products of Fermentation From fecal samples, we measured concentrations of SCFAs (acetate, butyrate, valerate, and propionate) and of the branched-chain fatty acid isobutyrate. Compared to the control group, HFD-fed mice had significantly reduced concentrations of acetate (p ≤ 0.0001) and increased concentrations of isobutyrate (p ≤ 0.05) and valerate (p ≤ 0.01). L. plantarum CNCM I−4459 supplementation of the HFD tended to reduce concentrations of valerate, even though this did not reach significance (Figure 7A). In light of the high positive correlation between fecal butyrate concentration and the abundance of the bacterial families Bifidobacteriaceae and Clostridiaceae (rho: 0.54 and 0.53; p ≤ 0.05; Figure 7B), it is possible that the observed decrease in butyrate could be explained by the reduced relative abundance of Bifidobacterium, which releases substrates for butyrate producers such as Clostridium. However, when a co-inertia analysis combining both SCFAs and microbiota composition was performed, samples did not cluster according to diet or treatment (Figure 7B). Total bacterial load and the Bifidobacteria and Lactobacillus/Leuconostoc groups were assessed by qPCR (Figure 6B). Our results showed an increase in the Lactobacillus/Leuconostoc group in HFD-fed mice (p ≤ 0.01) and an increase in Bifidobacteria in HFD-PBS mice (p ≤ 0.01). No modification was observed in the Lactobacillus/Leuconostoc group as a result of the L. plantarum intervention; however, bacterial treatment significantly decreased the Bifidobacteria group (p ≤ 0.0001). Discussion In the present work, we aimed to assess the beneficial health effect of L. plantarum strain CNCM I−4459 on HFD-fed mice. Although we detected no effect of L. plantarum treatment on body weight or food intake, the probiotic treatment improved a few metabolic parameters.
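Referring back to the correlation reported in the Results above between fecal butyrate and the relative abundance of Bifidobacteriaceae and Clostridiaceae (Spearman rho of 0.54 and 0.53), the sketch below shows how such a rank correlation can be computed with SciPy; the values are hypothetical and this is not the analysis code used in the study.

```python
from scipy.stats import spearmanr

# Hypothetical per-mouse measurements (same ordering in both lists):
butyrate_umol_g  = [3.1, 2.4, 4.0, 1.8, 2.9, 3.6, 2.2, 1.5]
bifido_rel_abund = [0.020, 0.012, 0.031, 0.008, 0.018, 0.025, 0.010, 0.006]

rho, p_value = spearmanr(butyrate_umol_g, bifido_rel_abund)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```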
HFD-fed mice are known to develop dyslipidemia with elevated serum lipid levels [38]. Circulating fatty acids enter directly into the liver for lipid synthesis [39]. Here, L. plantarum treatment stabilized LDL-c serum levels to concentrations that were similar to those of CD-fed mice. Usually, LAB decrease hyperlipidemia via activation of β-oxidation and/or inhibition of lipogenesis in the liver [21,40]. However, Yoo et al. [41] demonstrated that a combination of strains of Latilactobacillus curvatus and L. plantarum decreased hepatic lipid droplets and resulted in weight loss through the downregulation of β-oxidation and fatty acid synthesis. Interestingly, here, the expression of the lipogenic genes Fasn and Plin, the lipolytic gene Cpt1-α, and the insulin-dependent inhibitor of the lipogenic Srebp genes (Insig-2) was downregulated in L. plantarum-treated mice. Thus, this bacterial strain might lower hepatic lipid metabolism, although liver dyslipidemia should be examined in future work to confirm these observations. Additionally, obesity is characterized by an excess of fat stored in adipose tissue, and thus the regulation of lipid metabolism in adipose tissue is a crucial target for research. Studies have revealed that a good prognostic indicator for obesity is higher adipocyte expression of oxidative genes [18,19]. Among these, Ppar-α (peroxisome proliferator-activated receptor alpha) is a key regulator of fatty acid oxidation and adipocyte differentiation [42]. Here, interestingly, mice fed the HFD and treated with L. plantarum showed increased Ppar-α expression in adipose tissue. Obesity is associated with disturbances in glucose tolerance, leading to insulin resistance and type-2 diabetes [43]. The liver regulates glucose homeostasis and is the site of uncontrolled gluconeogenesis that can trigger hyperglycemia. In 1999, Rajas et al. [44] described the additional and crucial role played by G6pase (glucose-6-phosphatase, a rate-limiting enzyme in gluconeogenesis) in the small intestine, particularly in diabetic rats. Notably, we observed here that treatment with L. plantarum inhibited glucose metabolism and lowered the expression of G6pase in the liver and the ileum. G6pase expression was not observed in colon samples, which was unsurprising since its expression is known to decrease along the gut [45]. Recently, Balakumar et al. described an improvement in glucose tolerance and insulin sensitivity in HFD-fed mice as a result of probiotic intervention. They also reported no modifications in glucose transport proteins, with the exception of Glut-4 (whose expression is dependent on insulin) [43]. We hypothesize that in a diabetic state (as observed in HFD-fed mice), G6pase and Gcg-1 expression are increased, which enhances plasma concentrations of glucose and insulin. Treatment with L. plantarum stabilized glucose levels by inhibiting glucose metabolism and transport in an insulin-dependent manner. Indeed, probiotic treatment significantly reduced glucose levels (including fasting levels) during oral glucose tolerance tests and, consequently, the AUC. These data suggest that L. plantarum prevented the development of hyperglycemia in HFD-fed mice. HFD-fed mice are characterized by an inflammatory phenotype [4,46]. Indeed, several studies have reported increased levels of TNF-α, IL1-β, and IL-6 in adipose tissue that can lead to insulin resistance [47,48]. This inflammation does not originate only from the adipose tissues but can also spread to the intestine [47]. Although the present study confirmed that the HFD treatment induced systemic inflammation, a difference in TNF-α level was not found in the serum of L.
plantarum-treated mice compared to PBS-HFD-fed mice, nor in intestinal sections. IL-17 is a pro-inflammatory cytokine that has been reported to be upregulated in obese humans [49,50] as well as in mice with diet-induced obesity [51]. Our data are consistent with several studies that have reported substantial regulation of inflammatory pathways in the small intestine [47,48] by administration of probiotics: L. plantarum CNCM I−4459 tended to mitigate IL-17-associated ileal inflammation. A strong connection between inflammation, epithelium integrity, and oxidative stress has been reported in the literature [49]. HFD-fed mice are characterized by impairments in permeability that are related to the downregulation of tight junction proteins (especially ZO-1). Here, oral administration of L. plantarum CNCM I−4459 significantly increased ZO-1 expression compared with PBS-HFD mice. In parallel, the effect on the expression of junction proteins in the ileum was limited, since no significant modulation was observed compared to PBS-CD mice. Interestingly, although HFD is known to disturb epithelial junctions, we found here that, overall, the HFD intervention increased the expression of junction genes. The contrast between measurements of gene expression and protein levels suggests a compensatory effect that maintains protein levels and restores gut integrity. Here, these modifications in ileal junction proteins were correlated with a slightly lower level of IL-22. There are some discrepancies in the literature about the impact of metabolic syndromes on IL-22 secretion. Some data report that IL-22 modulates gut epithelium integrity and homeostasis, and recombinant IL-22 has been found to restore the barrier and metabolic parameters in HFD-fed mice [52,53]. In contrast, Garidou et al. [54] described a lower level of intestinal IL-22. This contrast might be explained in part by the different locations (adipocyte, splenocyte, or proximal vs. distal intestine) targeted by the different studies [53][54][55]. Taken together, the modifications observed as a result of treatment with L. plantarum CNCM I−4459 argue in favor of a restoration of the physiological state, since this treatment seemed to normalize levels of both tight junction proteins and IL-22 to those observed in CD-fed mice. Additionally, these regulations seemed to be independent of oxidative stress, since no modification was observed for Nrf-2. However, only Nrf-2 gene expression was assessed, and not oxidative activity; thus, these data should be interpreted with caution. Finally, accumulating reports reinforce the relationship between microbiota dysbiosis and metabolic disorders, such as diet-related obesity [56,57]. Here, we demonstrate that treatment with L. plantarum CNCM I−4459 was able to partially reverse HFD-induced dysbiosis by increasing microbial richness. Interestingly, L.
plantarum CNCM I−4459 counteracted the increase in the ratio of Firmicutes to Bacteroidetes that has been linked with obesity [1,2]. However, the probiotic intervention had no clear effect on overall microbial composition, as samples clustered only according to the type of diet (as demonstrated via analyses of beta diversity). Notably, HFD induced the over-representation of Firmicutes, mainly Oscillobacter, Dorea, Moryella, Lactobacillus, Roseburia, and Blautia, and of Actinobacteria, with Olsenella. Instead, genera such as Akkermansia, Prevotella, or Barnesiella, which have been associated with healthy phenotypes, were under-represented [58][59][60][61]. In most previous studies, HFD intervention induced a decrease in Bifidobacterium [46]. However, a few studies have reported, as we do here, an increase in this genus; this phenomenon has been explained mainly as a function of the bifidogenic activity of maltodextrin, which is present in higher amounts in HFD [40,62,63]. Treatment with L. plantarum CNCM I−4459 modulated microbial composition only slightly, with an increase in Lactobacillus (probably due to the daily oral administration of L. plantarum CNCM I−4459) and Prevotella, and a decrease in Roseburia, Anaerostipes, and Stomatobaculum (Lachnospiraceae); Anaerobacter and Anaerotruncus (Clostridiaceae); and Bifidobacterium. Administration of HFD is commonly linked with alterations in SCFA content, as HFDs are low in fiber (one of the substrates for intestinal bacterial fermentation), and animals fed HFD exhibit modified levels of SCFAs [64]. In our model, HFD did not affect the major SCFA products (butyrate and propionate) but led to a decrease in acetate, along with significant increases in valerate and isobutyrate. Treatment with L. plantarum CNCM I−4459 tended to reduce fecal butyrate content, probably linked to a lower abundance of butyrate producers such as Roseburia spp. Several studies have reported beneficial health effects of SCFAs on host physiology, especially butyrate (reviewed in [65]). However, it is not clear whether SCFA modulation directly contributes to the obesity phenotype or whether it is instead a consequence of microbial disturbance. A previous correlation analysis revealed a positive link between butyrate and members of the Bifidobacteriaceae and of the butyrate-producing Clostridiaceae [66], which here were both less abundant in mice treated with this strain of L. plantarum. In addition, and independently of a direct microbial effect on SCFA release, the increase in permeability observed in obese animals could favor intestinal absorption and consequently affect quantification in fecal samples. Altogether, these data suggest that (i) HFD significantly disturbed the composition of the gut microbiota and (ii) intervention with the probiotic strain restored these altered communities toward their "healthy" composition.
All of the metabolic parameters measured here were positively correlated with the bacterial taxa that were enriched in HFD-fed mice. The bacterial families that were positively correlated with probiotic treatment (Lachnospiraceae, Bacteroidaceae, and Prevotellaceae) were negatively correlated with fructosamine levels. This is in agreement with previous work showing the beneficial effect of Prevotella on glucose tolerance [60]. In our study, FFA content was positively correlated with the abundance of Clostridiaceae, while LDL-c was positively correlated with Bifidobacteriaceae and Lachnospiraceae and negatively correlated with Succinivibrionaceae. However, these data should be treated carefully because we performed our microbiota analysis on fecal samples, and therefore we cannot account for any changes in composition that may have occurred in different regions of the gut. In the near future, it will be particularly interesting to determine whether the microbial community differs in the ileal mucosa, where we observed several important physiological changes. Additionally, the gut microbiota was only assessed at the level of taxonomic composition, and we cannot exclude that changes at the functional level (transcriptomic analysis) could explain the observed improvements. Of note, previous studies have also shown effects of probiotic consumption on different hematological parameters in various animal models, such as mice, cats, and dogs [67,68]. In conclusion, treatment with L. plantarum CNCM I−4459 improved global metabolic parameters that were compromised by HFD administration. Here, we show that the CNCM I−4459 probiotic strain can act synergistically and moderately at different levels to improve host metabolism and favor the maintenance of homeostasis. L. plantarum CNCM I−4459 tended to restore the microbial composition and the resulting SCFA content (even if not significantly) to healthy profiles (here reflected by CD-fed mice) rather than favoring the expansion of certain detrimental species, with the overall result being an alleviation of HFD-induced microbiota dysbiosis. Overall, L. plantarum CNCM I−4459 ameliorated Figure 1. Effect of L. plantarum CNCM I−4459 supplementation on body weight (A); food efficiency ratio (B). Mice were separated into 3 groups: mice treated with PBS and fed with CD (control diet) (PBS-CD group, n = 16), mice treated with PBS and fed with HFD (high-fat diet) (PBS-HFD group, Figure 5. Effect of L. plantarum CNCM I−4459 on the gut microbiota structure: (A) Microbial diversity (** and *** respectively represent p < 0.01 and p < 0.001, ns = not significant; Kruskal-Wallis rank sum test with pairwise Wilcoxon rank sum post hoc test); (B) Principal coordinate analysis (PCoA) plot depicting the inter-individual variability based on the microbiota composition. Each dot is a unique composition of a single fecal sample. Distances between dots highlight the degree of similarity among samples. Differences between groups of samples stratified according to diets and/or CNCM I−4459 were tested by PERMANOVA. Samples in red correspond to mice fed with CD (n = 8 mice), samples in blue correspond to mice fed with HFD and treated with L. plantarum CNCM I−4459 (n = 7 mice), and samples in green correspond to mice fed with HFD (n = 8 mice); (C) Ratio of Firmicutes to Bacteroidetes. The ratio is represented by box-and-whisker plots (median and quartile values); Kruskal-Wallis rank sum test (pairwise Wilcoxon rank sum post hoc test).
Figure 6. L. plantarum CNCM I−4459 modulated microbiota. (A) Differential taxa abundance between PBS-HFD and L. plantarum CNCM I−4459-HFD. The LEfSe algorithm uses the effect size of each differentially abundant feature, and significance is subsequently investigated using a set of pairwise tests using the (unpaired) Wilcoxon rank-sum test; (B) Measures of fecal bacteria by qPCR. Data represent box-and-whisker plots (mean, minimal, and maximum values). Data were analyzed with the Kruskal-Wallis test (Dunn's post hoc test) and compared to PBS-administered HFD-fed mice. ** and **** represent, respectively, p < 0.01 and p < 0.0001. Figure 7. Modulation of fecal short-chain fatty acids: (A) Level of SCFA; data are represented as box-and-whisker plots (mean, minimal, and maximum values). Data were analyzed with the Kruskal-Wallis test (Dunn's post hoc test) and compared to PBS-administered HFD-fed mice. *, **, and **** represent, respectively, p < 0.05, p < 0.01, and p < 0.0001; (B) Correlation between microbiota composition at family level and SCFA. Samples in red correspond to mice fed with CD, samples in blue correspond to mice fed with HFD and treated with L. plantarum CNCM I−4459, and samples in green correspond to mice fed with HFD. Figure 8. Effect of HFD on the gut microbiota and metabolic parameters correlation. Samples in red correspond to mice fed with CD, samples in blue correspond to mice fed with HFD and treated with L. plantarum CNCM I−4459, and samples in green correspond to mice fed with HFD.
8,607.2
2023-10-01T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
Promoter methylation status of the tumor suppressor gene SOX11 is associated with cell growth and invasion in nasopharyngeal carcinoma Background The transcription factor SOX11 is one of the members of the SRY box-containing (SOX) family, which is emerging as an important group of transcriptional regulators. In recent years, up-regulation of SOX11 has been detected in various types of solid tumors. In this study, the effects of promoter methylation of the SOX11 gene on SOX11 expression and on cell growth and invasion in nasopharyngeal carcinoma were investigated. Methods In this study, methylation-specific PCR and real-time quantitative PCR were applied to investigate the effect of promoter methylation of the SOX11 gene on SOX11 expression in nasopharyngeal carcinoma and chronic inflammation tissues. The nasopharyngeal carcinoma cell line (CNE2) was treated with 5-aza-2'-deoxycytidine. The effect of promoter methylation of SOX11 on growth and invasion of nasopharyngeal carcinoma cells was assessed with the MTT test and the Boyden chamber Matrigel invasion assay. Results No or weak expression of SOX11 mRNA was detected in nasopharyngeal carcinoma tissues with SOX11 gene promoter methylation. Strong expression of SOX11 mRNA was detected in nasopharyngeal carcinoma tissues with an unmethylated SOX11 gene promoter and in chronic inflammation tissues of pharynx nasalis. In the CNE2 cell line, the SOX11 gene was demethylated, SOX11 mRNA and protein were re-expressed, and cell growth and invasion were inhibited after 5-aza-2'-deoxycytidine treatment. Conclusions The results of the study indicate that expression of SOX11 mRNA and protein was related to SOX11 gene methylation status. SOX11 gene methylation may play a role in the growth and invasion of nasopharyngeal carcinoma cells. Introduction Nasopharyngeal carcinoma (NPC) is a common tumor of the head and neck. There is a high incidence of NPC in southern China. Its pathogenesis is not very clear and may be related to a variety of factors. In recent years, the epigenetic regulation of genes has attracted much attention from researchers. Abnormal DNA methylation is an important mechanism of epigenetic regulation. The methylation status of a gene promoter is related to gene activity [1,2]. DNA methylation plays an important role in tumorigenesis. CpG island methylation of tumor suppressor genes, which results in transcriptional inactivation, has become an important part of cancer epigenetics research. Multiple tumor suppressor genes inactivated by promoter CpG island methylation have been found in a variety of tumor tissues and cells, such as promoter hypermethylation and BRCA1 inactivation in sporadic breast and ovarian tumors [3], hypermethylation of the APC (adenomatous polyposis coli) gene promoter region in human colorectal carcinoma [4], the incidence and functional consequences of hMLH1 promoter hypermethylation in colorectal carcinoma [5], and hypermethylation around the promoter as a possible mechanism of E-cadherin inactivation in human carcinomas [6]. The transcription factor SOX11 is one of the members of the SRY box-containing (SOX) family, an emerging group of important transcriptional regulators which, as a whole, controls cell fate and differentiation [7]. Twenty SOX genes have been identified in the mouse and human genomes. All SOX genes contain a DNA-binding high mobility group (HMG) domain and protein-specific domains implicated in the activation and repression of gene transcription [8].
It has been found that SOX11 plays an important role in the development of the nervous system and in adult neurogenesis [9,10]. SOX11 up-regulation has been detected in various types of solid tumors, such as gliomas and epithelial ovarian tumors [11,12]. Vegliante found that SOX11 expression is related to methylation of the SOX11 gene promoter in lymphoid neoplasms [13]. In the present study, we have performed a study of SOX11 gene methylation, including DNA methylation in nasopharyngeal carcinoma tissues and DNA demethylation in the CNE2 cell line (a human nasopharyngeal carcinoma cell line). The findings show that weak expression of SOX11 is related to methylation of the SOX11 gene promoter in nasopharyngeal carcinoma tissues, and that SOX11 re-expression is associated with demethylation of the SOX11 gene after 5-aza-2'-deoxycytidine treatment in CNE2 cells. Clinical material Fifty-six tissue specimens of pharynx nasalis were included in the study. All the biopsies were obtained, with consent, from patients before treatment at the Department of Otolaryngology of Guangming New District People's Hospital of Shenzhen. The ratio of male to female patients was 4.6 to 1. The age range was 16-62 years, with a mean age of 49 years. All specimens were subjected to histological diagnosis by a pathologist. There were forty-three nasopharyngeal carcinoma (NPC) tissues and thirteen chronic inflammation tissues. All 43 nasopharyngeal carcinoma tissues were undifferentiated nasopharyngeal carcinoma. On the basis of the TNM stage classification (UICC 2002), 7 (16.3%) patients had stage I disease, 13 (30.2%) patients had stage II disease, 11 (25.6%) patients had stage III disease, and 12 (27.9%) patients had stage IV disease. As for lymph node metastasis in the neck, 29 patients had lymph node metastasis and 14 patients had no lymph node metastasis. No chemotherapy or radiotherapy was given to patients with nasopharyngeal carcinoma before biopsy. RNA isolation and real-time quantitative PCR Total RNA was isolated from tissues or cells using TRIzol reagent. Reverse transcription was performed with the RevertAid First Strand cDNA Synthesis Kit. The cDNA was amplified using the TOYOBO THUNDERBIRD SYBR qPCR Mix kit with the following primers specific for either SOX11 or the house-keeping gene β-actin (primers were synthesized by Invitrogen Biotechnology Co., LTD). β-actin: 5'-GTCCACCGCAAATGCTTCTA-3' and 5'-TGCTGTCACCTTCACCGTTC-3'; SOX11: 5'-AAGAACATCACCAAGCAGCACC-3' and 5'-TGTGAACACCAGGTCGGAGAAG-3'. Real-time PCR products were detected using the SLAN Fluorescence Quantitative PCR Detection System. The β-actin gene was used as the internal control. Western blot analysis Total nuclear extracts were isolated, analyzed on a SDS-polyacrylamide gel, and transferred onto a polyvinylidene difluoride membrane. Immunoblotting was performed using a sheep polyclonal anti-SOX11 antibody and an anti-β-actin antibody. The membranes were washed with Tris-buffered saline and then incubated with a 1:3000 dilution of secondary antibodies. The proteins were visualized using a chemiluminescence detection kit from Perkin-Elmer. Cell culture The CNE2 cell line, a NPC cell line, was obtained from the China Center for Type Culture Collection. The CNE2 cell line was cultured in RPMI-1640 medium (HyClone, South Logan, UT) supplemented with 10% (v/v) fetal bovine serum (Sijiqing Biological Engineering Materials Co, Hangzhou, China) at 37 °C in 5% CO2.
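As a side note on the real-time quantitative PCR normalization to β-actin described above, the following is a minimal sketch of relative expression using the common 2^(−ΔΔCt) approach; the paper does not state which quantification method was used, so both the method and the Ct values below are assumptions for illustration only.

```python
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Relative SOX11 expression by the 2^(-ddCt) method, normalized to beta-actin.

    ct_target / ct_reference           : Ct values in the sample of interest
    ct_target_ctrl / ct_reference_ctrl : Ct values in the calibrator sample
    """
    d_ct_sample = ct_target - ct_reference
    d_ct_ctrl = ct_target_ctrl - ct_reference_ctrl
    dd_ct = d_ct_sample - d_ct_ctrl
    return 2 ** (-dd_ct)

# Hypothetical Ct values: SOX11 vs beta-actin in 5-aza-cdr-treated vs untreated CNE2 cells
fold_change = relative_expression(ct_target=26.5, ct_reference=17.0,
                                  ct_target_ctrl=31.0, ct_reference_ctrl=17.2)
print(f"SOX11 fold change after treatment: {fold_change:.1f}x")
```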
CNE2 cells in the logarithmic phase of growth were digested with 0.25% trypsin and resuspended at a concentration of 1 × 10^5 cells/ml. The cells were seeded onto 96-well plates at a density of 1 × 10^4 cells/well in triplicate and incubated for 24 hours to allow the cells to attach. Then, medium containing 5-aza-2'-deoxycytidine (5-aza-cdr; 0, 0.5, 1, 5, 10, 20, 40, 80, or 160 μmol/L) was added to each well of the 96-well plates. Zeroing (blank) wells were also arranged in these plates. After the cells had been cultured for a further 24 hours, 50 μl of MTT solution (5 mg/mL) was added to each well. Two hours later, the clear supernatant was removed and 200 μl of dimethyl sulphoxide (DMSO) was added to each well. The absorbance of the solution at 570 nm was determined using a microplate reader. The inhibition ratio of the drug on the cells was calculated with the formula IR = (1 − A of experimental group / A of control group) × 100%. Boyden chamber Matrigel invasion assay The invasive capacity of the control group (without 5-aza-cdr) and the experimental group (with 80 μmol/L 5-aza-cdr) of CNE2 cells was examined using a two-compartment system: Boyden chambers (Corning Incorporated, New York, USA) with Matrigel basement membrane matrix (BD Biosciences, New Jersey, USA). All cells were analyzed for viability, and an equal number of viable cells (10^5) was added to the upper chamber and allowed to invade through the Matrigel onto the filters for 24 hours. At the end of the incubation period, the filters were washed, fixed, and stained. The invading cells were then examined and counted in 10 randomly selected fields under a light microscope at ×400 magnification; the ten random fields for each set of experiments were analyzed and the average number of invaded cells was calculated. Statistical analysis All statistical analysis was performed using the Statistical Package for Social Sciences (SPSS, version 17.0). The chi-square test was used to assess differences in SOX11 gene methylation between nasopharyngeal carcinoma patients with and without cervical lymph node metastasis and among TNM stages. The t test was used to assess the difference in invasive capacity of CNE2 cells before and after 5-aza-cdr treatment. A P value less than 0.05 was considered statistically significant. Results Methylation status of the SOX11 gene in nasopharyngeal carcinoma and chronic inflammation tissues SOX11 gene promoter methylation was found in 29 of 43 (67.4%) nasopharyngeal carcinoma tissues. None of the 13 chronic inflammation tissues of the pharynx nasalis showed SOX11 gene promoter methylation (Figure 1). The chi-square test showed no significant difference in the methylation rate of the SOX11 gene promoter among samples from patients with nasopharyngeal carcinoma at different TNM stages. However, the methylation rate of the SOX11 gene promoter in nasopharyngeal carcinoma tissues from patients with lymph node metastasis was significantly higher than in tissues from patients without lymph node metastasis (Table 1). Expression of SOX11 mRNA in nasopharyngeal carcinoma and chronic inflammation tissues No or very weak expression of SOX11 mRNA was detected in nasopharyngeal carcinoma tissues with SOX11 gene promoter methylation. Strong expression of SOX11 mRNA was found in nasopharyngeal carcinoma tissues without SOX11 gene promoter methylation and in chronic inflammation tissues of the pharynx nasalis (Figure 2).
Figure 1 Methylation status of the SOX11 gene in nasopharyngeal carcinoma and chronic inflammation tissues. Lane M, amplified product with primers recognizing methylated sequences; Lane U, amplified product with primers recognizing unmethylated sequences. Tm: SOX11 gene methylated nasopharyngeal carcinoma tissues; Tu: SOX11 gene unmethylated nasopharyngeal carcinoma tissues; Ci: chronic inflammation tissues of the pharynx nasalis. Only the methylated product was amplified in nasopharyngeal carcinoma tissues with SOX11 gene promoter methylation; only the unmethylated product was amplified in the chronic inflammation tissues and in nasopharyngeal carcinoma tissues without SOX11 gene promoter methylation. Expression of SOX11 protein in nasopharyngeal carcinoma and chronic inflammation tissues Weak expression of SOX11 protein was detected in nasopharyngeal carcinoma tissues with SOX11 gene promoter methylation. Strong expression of SOX11 protein was shown in nasopharyngeal carcinoma tissues without SOX11 gene promoter methylation and in chronic inflammation tissues of the pharynx nasalis (Figure 3). MTT test The MTT test showed that inhibition of cell growth became more pronounced as the drug concentration increased. At a drug concentration of 80 μmol/L, the inhibition rate of cell growth was almost 50% (Figure 4). Effect of 5-aza-cdr on the invasion capacity of CNE2 cells The invasive capacity of the control group (without 5-aza-cdr) and the experimental group (treated with 5-aza-cdr) of CNE2 cells was examined using the Boyden chamber Matrigel invasion assay. As expected, the number of invading cells was significantly lower for CNE2 cells treated with 5-aza-cdr (8.40 ± 1.26) than for untreated CNE2 cells (12.10 ± 1.20; t = 6.718, p = 0.000). These results show that the invasive capacity of CNE2 cells treated with 5-aza-cdr was significantly decreased compared with untreated CNE2 cells (Figure 5). The electrophoretogram showed weak expression of SOX11 protein in nasopharyngeal carcinoma tissues with SOX11 gene promoter methylation, and strong expression of SOX11 protein in chronic inflammation tissues and in nasopharyngeal carcinoma tissues without SOX11 gene promoter methylation. Effect of 5-aza-cdr on methylation of the SOX11 gene in CNE2 cells To detect the effect of 5-aza-cdr on methylation of the SOX11 gene in CNE2 cells, 10^5 CNE2 cells were seeded into each well of 6-well plates and incubated for 24 hours. Then, medium containing 5-aza-cdr (80 μM) was added randomly to 3 wells; nothing was added to the other 3 wells. All CNE2 cells in the 6-well plates were digested with 0.25% trypsin after a 48-hour incubation. The methylation status of the SOX11 gene in the CNE2 cells was detected by methylation-specific PCR. The results showed that the SOX11 gene was demethylated in CNE2 cells after treatment with 5-aza-cdr (Figure 6). Effect of 5-aza-cdr on expression of SOX11 mRNA and protein in CNE2 cells To detect the effect of 5-aza-cdr on expression of SOX11 mRNA and protein in CNE2 cells, 10^5 CNE2 cells were seeded into each well of 6-well plates and treated as described above. CNE2 cells in the 6-well plates were digested with 0.25% trypsin after a 48-hour incubation. RT-PCR and Western blot results showed re-expression of SOX11 mRNA and protein after treatment with 5-aza-cdr in CNE2 cells (Figures 7 and 8).
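For illustration only, the following minimal sketch shows how the growth-inhibition ratio defined in the MTT methods and the two-sample t test used in the statistical analysis could be computed; the absorbance readings and per-field invasion counts below are hypothetical placeholders, not data from this study.

```python
# Illustrative only: absorbance values and invasion counts are hypothetical,
# not data from this study.
import numpy as np
from scipy import stats

def inhibition_ratio(a_experimental, a_control):
    """MTT growth-inhibition ratio: IR = (1 - A_exp / A_ctrl) x 100%."""
    return (1.0 - np.mean(a_experimental) / np.mean(a_control)) * 100.0

# Hypothetical background-corrected absorbance readings at 570 nm (triplicates).
a_ctrl = np.array([0.82, 0.79, 0.85])   # untreated CNE2 cells
a_80um = np.array([0.43, 0.40, 0.41])   # 80 umol/L 5-aza-cdr
print(f"Inhibition ratio at 80 umol/L: {inhibition_ratio(a_80um, a_ctrl):.1f}%")

# Hypothetical per-field invasion counts (10 random fields per group) compared
# with an independent two-sample t test, as in the Statistical analysis section.
invaded_ctrl = np.array([13, 11, 12, 14, 12, 11, 13, 12, 10, 13])
invaded_trt  = np.array([ 9,  8,  7, 10,  8,  9,  8,  7,  9,  9])
t, p = stats.ttest_ind(invaded_ctrl, invaded_trt)
print(f"t = {t:.3f}, p = {p:.4f}")
```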
Discussion The transcription factor SOX11 plays an important role in the embryonic development of the central nervous system, in developing neuron growth and survival, and in the recovery of adult neurons following tissue injury [9,10,14]. Several studies have recently demonstrated that SOX11 is up-regulated in various tumors, such as lymphoid neoplasms [15], gliomas, and epithelial ovarian tumors [11,12]. Brennan [12] revealed strong nuclear expression of SOX11 in epithelial ovarian cancer, which correlated with a prolonged recurrence-free survival, and therefore suggested that SOX11 plays a functional role in the regulation of tumor growth. Hide detected that over-expression of SOX11 prevents tumorigenesis of human glioma initiating cells [16]. Gustavsson found that SOX11 expression can be epigenetically silenced through DNA methylation in a subset of B cell malignancies [17]. In this study, the expression and methylation status of the SOX11 gene were determined in nasopharyngeal carcinoma and chronic inflammation tissues of the pharynx nasalis. We found that weak expression of SOX11 correlates with methylation of the SOX11 gene in nasopharyngeal carcinoma tissues. Epigenetic mechanisms include DNA methylation, histone modifications, and RNA interference. DNA methylation is the main epigenetic event in humans, and changes in the DNA methylation pattern play an important role in tumorigenesis [18]. In recent years, the study of tumor suppressor gene promoter methylation has become an important topic in the occurrence and development of cancer. Many studies have demonstrated that the promoters of multiple cancer-related genes are frequently methylated in a variety of human cancers [19][20][21]. DNA methylation is a reversible biochemical modification [22]. The transcriptional inactivation of tumor suppressor genes caused by CpG island methylation can be reversed with the DNA methyltransferase inhibitor 5-aza-2'-deoxycytidine. This reversal (CpG island demethylation) can restore the expression of tumor suppressor genes and then inhibit cell proliferation and tumor growth [23]. Therefore, restoring the expression of tumor suppressor genes by using DNA methyltransferase inhibitors has become one of the new approaches to cancer gene therapy. Previous studies demonstrated that loss of DAPK expression is associated with aberrant promoter region methylation in a nasopharyngeal cancer cell line (CNE2) and a laryngeal cancer cell line (Hep-2), and that 5-aza-2'-deoxycytidine can reactivate DAPK genes silenced by promoter region hypermethylation and slow the growth of Hep-2 cells and CNE2 cells in vitro and in vivo [24,25]. In the present study, the changes in growth and invasion of CNE2 cells were examined after treatment with 5-aza-2'-deoxycytidine. Figure 4 Growth inhibition plot of CNE2 cells after treatment with 5-aza-cdr. The plot shows that inhibition of CNE2 cell growth becomes more pronounced as the 5-aza-cdr concentration increases; at a 5-aza-cdr concentration of 80 μmol/L, the inhibition rate of CNE2 cell growth is almost 50%. Figure 6 Effect of 5-aza-cdr on SOX11 gene methylation in CNE2 cells. Lane M, amplified product with primers recognizing methylated sequences; Lane U, amplified product with primers recognizing unmethylated sequences. Only the methylated product was amplified in CNE2 cells without any treatment, and only the unmethylated product was amplified in CNE2 cells after treatment with 5-aza-cdr, showing that the SOX11 gene was demethylated by 5-aza-cdr in CNE2 cells.
The data showed that inhibition of CNE2 cell growth increased with increasing drug concentration, the invasive capacity of CNE2 cells was significantly decreased, re-expression of SOX11 mRNA and protein was detected, and the SOX11 gene was demethylated after treatment with 5-aza-cdr in CNE2 cells. These results indicate that re-expression of SOX11 mRNA and protein may be one of the factors that decrease the growth and invasion capacity of CNE2 cells, because 5-aza-2'-deoxycytidine is a DNA methyltransferase inhibitor that can reverse gene methylation during DNA replication. Nasopharyngeal carcinoma is a common tumor of the head and neck, and radiotherapy is its main treatment. In recent years, although radiotherapy techniques and equipment have been progressively updated, the therapeutic outcome of nasopharyngeal carcinoma has not greatly improved, because the pathogenesis of nasopharyngeal carcinoma is still not well understood. Many studies have explored the pathogenesis and treatment of nasopharyngeal carcinoma, most of them from the perspectives of genetics and epigenetics, and the study of tumor suppressor gene promoter methylation is receiving increasing attention. In a previous study, we treated CNE2 cells with 5-aza-2'-deoxycytidine; their proliferation and growth were significantly inhibited, and re-expression of the DAPK gene, silenced through DNA methylation, was found in CNE2 cells [24]. In the present study, after CNE2 cells were treated with 5-aza-2'-deoxycytidine, the SOX11 gene was demethylated and re-expressed, and the growth and invasion of CNE2 cells were inhibited. The inhibition of growth and invasion of CNE2 cells is probably associated with re-expression of various tumor suppressor genes, of which SOX11 is one. Therefore, SOX11 gene methylation may play a role in the growth and invasion of nasopharyngeal carcinoma cells. Conclusions In conclusion, these data provide a characterization of the epigenetic mechanisms underlying SOX11 deregulation in nasopharyngeal carcinoma. No or weak expression of the SOX11 gene was detected in nasopharyngeal carcinoma tissues with DNA methylation, whereas strong expression of the SOX11 gene was found in chronic inflammation tissues of the pharynx nasalis and in nasopharyngeal carcinoma tissues without DNA methylation. After CNE2 cells were treated with 5-aza-2'-deoxycytidine, SOX11 gene expression was recovered and the growth and invasion of CNE2 cells were inhibited, indicating that SOX11 expression may be one of the factors that decrease the growth and invasion capacity of CNE2 cells. Additional studies are required to elucidate the functional role of illegitimate SOX11 expression in nasopharyngeal carcinoma.
4,205.6
2013-11-05T00:00:00.000
[ "Biology", "Medicine" ]
Magnetoresistive effect in YbxMn1-xS at small concentrations The results of measurements of electrical resistivity without a field and in a magnetic field of 0.8 T in the temperature range 100 K < T < 450 K for compositions YbxMn1-xS with x = 0.05 and 0.1 are presented. For x = 0.05, a giant magnetoresistive effect at room temperature is found. Introduction The relationship between magnetic and electrical properties is an important factor for creating electronic devices that operate on new principles and open new directions, such as spintronics [1][2][3] and new multiferroic materials [4][5][6][7][8][9]. The interaction of electrons with an elastic lattice can also manifest itself in the resistive properties of semiconductors [10][11][12][13]. A strong magnetoresistive effect was found in the vicinity of orbital ordering in the two-orbital Hubbard model with quarter electron filling per site [14]. The resistance shows a small maximum in the region where orbital ordering forms [15]; the electron density of states at the Fermi level splits in the magnetic field, which leads to an increase in the resistance in the paramagnetic phase [16][17]. The formation of orbital ordering is accompanied by lattice deformations and changes in the magnetic state [18][19][20]. In chalcogenide compounds with polymorphic transitions, negative magnetoresistance is associated with tunneling of electrons having the same spin orientation in the magnetic field [21][22][23]. Manganese sulfide has a NaCl-type FCC crystal lattice with a unit cell parameter a = 0.5222 nm (MnS). Ytterbium sulfide has a NaCl-type FCC crystal lattice with a lattice constant a = 0.5693 nm (YbS). When the critical pressure P = 8 GPa is reached, the YbS lattice is compressed by 12% [24]. It can be expected that, when manganese cations are replaced with ytterbium ions, the pressure of the nearest neighbors will lead to a change in the valence of the ytterbium ions and to the formation of a metallic bond. If the valence of the metal is taken to be +3 and that of sulfur to be -2, then each unit cell containing four formula units of YbS will have four electrons that do not participate in the Me-S bond. These electrons will take part in the Me-Me bond and become collectivized. The formation of chemical bonds between ytterbium and manganese ions induces a rearrangement of the electronic structure in the solid solution and changes in the magnetic and transport properties [25]. The purpose of this work is to determine the optimal temperature and concentration conditions for the occurrence of the magnetoresistive effect. To achieve this goal, the temperature dependences of the resistance in a magnetic field are measured, and a correlation with the elastic subsystem is established from the temperature dependence of the thermal expansion coefficient of the lattice. Experimental results and discussion X-ray diffraction analysis was performed on a DRON-3 instrument. X-ray patterns and the crystal structure of YbxMn1-xS compositions were studied at room temperature on the polished side surfaces of parallelepiped samples: in the initial state after preparation and after electrical measurements up to 500 K. No reflections other than those of the FCC structure were observed. The unit cell size in the YbxMn1-xS solid solution increases linearly with concentration.
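As a rough consistency check on the reported linear increase of the unit cell with concentration, the following sketch interpolates between the MnS and YbS lattice constants quoted above under an assumed ideal Vegard-law mixing; real solid solutions may deviate from this linear estimate.

```python
# Illustrative sketch assuming ideal Vegard (linear) mixing between the MnS and
# YbS end members quoted above; measured values may deviate from this.
A_MNS = 0.5222   # nm, NaCl-type MnS
A_YBS = 0.5693   # nm, NaCl-type YbS

def vegard_lattice_constant(x):
    """Estimated cubic lattice constant of YbxMn1-xS in nm."""
    return (1.0 - x) * A_MNS + x * A_YBS

for x in (0.0, 0.05, 0.1):
    print(f"x = {x:.2f}: a ~ {vegard_lattice_constant(x):.4f} nm")
```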
Electrical resistivity measurements were carried out using a four-probe compensation method with direct current in the temperature range 80 K - 500 K. Temperature dependences of the electrical resistance for the YbxMn1-xS solid solutions are shown in figure 1. The dependence of resistance on temperature has an activation form; for compositions with x ≤ 0.1 it is of a typical semiconductor type and does not differ qualitatively from the temperature dependence ρ(T) of MnS. On heating the Yb0.05Mn0.95S solid solution, the activation energy increases by a factor of 1.7 at T = 440 K. With increasing concentration, the activation energy decreases and this temperature shifts downward to T = 390 K for x = 0.1. Replacement of manganese with ytterbium leads to an increase in the concentration of current carriers and a decrease in the activation energy. No plateau of ρ(T) is observed in the temperature range 300 - 500 K for the solid solution in a magnetic field. The energies of the impurity states (Ei) lie below the bottom of the conduction band (Ec) by the value ΔE = Ec - Ei, and with increasing concentration this energy interval decreases. The influence of the magnetic field on the electrical resistance was studied without a field and in a magnetic field H = 0.8 T. In the YbxMn1-xS solid solution, the resistance increases in the magnetic field H = 0.8 T in the range 150 K < T < 450 K, and the relative change of the specific electrical resistance in the magnetic field reaches a maximum at T = 329 K. Near room temperature there is a giant positive magnetoresistive effect, with the resistance changing by an order of magnitude (figure 2a). The value of the activation energy does not change in the range 150 K - 300 K, while the pre-exponential factor decreases tenfold in the magnetic field. In YbxMn1-xS with x = 0.1, the activation energy of the current carriers decreases in the magnetic field, and the magnetoresistive coefficient changes sign from negative to positive at T = 260 K (figure 2b). The maximum value of (ρ(H) − ρ(0)) / ρ(0) is observed at T = 360 K, and the magnetoresistive effect disappears asymptotically at T = 500 K. The formation of orbital ordering [26,27] of a glass-state type [28,29] can lead to changes in the elastic characteristics, both static and dynamic, for example a change in the impedance in a magnetic field [30]. Below, the thermal expansion coefficient of the lattice as a function of temperature is studied. The thermal expansion coefficient was measured using a Netzsch DIL-402C dilatometer in the temperature range 200 K - 750 K with a heating rate of 5 K/min. Fused quartz and corundum standards were used to calibrate and account for the thermal expansion of the measuring system. The results of the strain studies (ΔL/L) and the thermal expansion coefficient (α(T)) for the YbxMn1-xS samples (x = 0.05, 0.1) are shown in figure 3. On heating, the thermal expansion coefficient decreases and has a minimum at T = 275 K, with a sharp jump in α(T) for the sample with x = 0.1. At this temperature, the permittivity and magnetocapacitance have maximum values, and the magnetoresistive effect changes sign from negative to positive. Above T = 480 K, the slope of the temperature dependence α(T) decreases (dα/dT = 0 at T = 480 K) and the magnetoresistance disappears.
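Before turning to the origin of the thermal-expansion behavior, a brief illustrative sketch is given of how the two quantities discussed above, the activation energy of the activation-type ρ(T) dependence and the magnetoresistive ratio (ρ(H) − ρ(0))/ρ(0), can be extracted from resistivity data; all numerical values in the sketch are hypothetical placeholders rather than the measured data.

```python
# Illustrative sketch: activation energy from the Arrhenius form
# rho(T) ~ rho0 * exp(Ea / (kB * T)), and the relative magnetoresistance.
# All numbers below are hypothetical placeholders, not the measured data.
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy_eV(T, rho):
    """Fit ln(rho) = ln(rho0) + Ea/(kB*T); return Ea in eV."""
    slope, _ = np.polyfit(1.0 / np.asarray(T), np.log(np.asarray(rho)), 1)
    return slope * K_B

def magnetoresistance(rho_H, rho_0):
    """Relative magnetoresistance (rho(H) - rho(0)) / rho(0)."""
    return (rho_H - rho_0) / rho_0

# Hypothetical resistivity data (Ohm*cm) over 300-450 K with Ea = 0.25 eV.
T = np.array([300.0, 330.0, 360.0, 390.0, 420.0, 450.0])
rho = 1e-2 * np.exp(0.25 / (K_B * T))
print(f"Ea ~ {activation_energy_eV(T, rho):.3f} eV")

# A tenfold increase of resistance in field corresponds to ~900 % magnetoresistance.
print(f"MR = {100 * magnetoresistance(rho_H=10.0, rho_0=1.0):.0f} %")
```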
The increase in the thermal expansion coefficient with temperature may be caused by the anharmonicity of lattice vibrations resulting from the electron-phonon interaction, which induces an asymmetry of the crystal field at the ion. Conclusion Thus, a magnetoresistive effect was found for YbxMn1-xS with x = 0.05 and x = 0.1 at temperatures above room temperature, and for x = 0.05 the magnetocapacitance changes sign in the vicinity of 200 K. A decrease in the activation energy for the transition of electrons from the impurity level to the conduction band and in the mobility of charge carriers in a magnetic field was found. For the Yb0.05Mn0.95S sample, the magnetoresistive effect is 900% at room temperature. The change of the magnetoresistance sign from negative to positive on heating is established for the Yb0.1Mn0.9S sample. The critical temperature above which the magnetoresistance disappears is determined. The magnetoresistive effect is explained within a model of orbital ordering of electrons. The change in the sign of the magnetoresistance with temperature is explained by the formation of magnetic and orbital ordering at different temperatures.
1,888.6
2022-03-01T00:00:00.000
[ "Physics", "Materials Science" ]
Reduction-Induced Magnetic Behavior in LaFeO3−δ Thin Films The effect of oxygen reduction on the magnetic properties of LaFeO3−δ (LFO) thin films was studied to better understand the viability of LFO as a candidate for magnetoionic memory. Differences in the amount of oxygen lost by LFO and its magnetic behavior were observed in nominally identical LFO films grown on substrates prepared using different common methods. In an LFO film grown on as-received SrTiO3 (STO) substrate, the original perovskite film structure was preserved following reduction, and remnant magnetization was only seen at low temperatures. In a LFO film grown on annealed STO, the LFO lost significantly more oxygen and the microstructure decomposed into La- and Fe-rich regions with remnant magnetization that persisted up to room temperature. These results demonstrate an ability to access multiple, distinct magnetic states via oxygen reduction in the same starting material and suggest LFO may be a suitable materials platform for nonvolatile multistate memory. Introduction Magnetoionics are a recently introduced approach to non-volatile magnetic memory, wherein the application of a voltage across a solid or liquid dielectric medium drives ion migration (typically, H + or O 2− ) in and out of a magnetic material and induces an observable change in its properties [1][2][3][4][5][6].Magnetoionics offer some unique advantages compared to other approaches to voltage-control of magnetism in materials [7][8][9][10].For example, magnetoionics have shown reversible magnetic property switching throughout films many tens of nanometers thick [11,12], whereas purely electronic methods of modulating magnetism in oxides are often screened within a few unit cells of the surface [13].The ion migration process also drives a complex composition change that can electrically dope the system to trigger electronic phase transitions [14,15], cause structural instabilities and drive crystal phase transitions [5], and produce new chemical phases [11].Magnetoionic devices have already been built that exhibit robust cycling performance and switching speeds approaching the kHz [5,11,12,16].Most magnetoionic devices to date have functioned by switching the coercive field, magnetization, or transition temperature of the material. 
This study presents an early investigation into the viability of LaFeO 3−δ (LFO) thin films for magnetoionic memory.The orthoferrites are an intriguing family of candidate materials owing to their combination of fast ion transport and wide variety of Materials 2024, 17, 1188 2 of 12 magnetic properties [17,18].LFO was chosen because it is somewhat similar in chemistry to known high-temperature ion conductors like La 1−x Sr x Co 1−y Fe y O 3 (LSCFO) and Ba 1−x Sr x Co 1−y Fe y O 3 (BSCFO) [17,[19][20][21][22][23][24], but is magnetically better understood.LFO exhibits a well-characterized G-type antiferromagnetic (AFM) ground state with one of the highest ordering temperatures of any known perovskite oxide (T N ≈ 740 K) when fully oxidized [25][26][27][28][29][30].However, ferromagnetic (FM)-like behavior has been reported many times in LFO nanoparticles and thin films [31][32][33], which is often attributed to defect-or surface-related spin physics that cants the spins forming a canted AFM state (c-AFM) rather than a true FM or ferrimagnetic state.Yet some studies have suggested routes to stabilize LFO and other ferrites in a mixed Fe valence state (i.e., Fe 3+ /Fe 2+ ) [33][34][35][36], and thereby possibly drive the system into a double-exchange FM state as seen in mixed valence manganates [37] and double perovskites [38]. In this work, LFO films were reduced using a metal getter layer and thermal anneal in vacuum as a carefully controlled means of driving the oxygen migration that would be driven electrically in a device.Inspired by work highlighting the sensitivity of LFO surfaces to substrate preparation [39] and others highlighting the important role oxide substrates as sources and sinks of oxygen in oxide ionic devices [40], three common substrate preparations were used, and then LFO films and metal layers were deposited identically, and the samples annealed simultaneously to keep the film structures and oxygen migration driving force as comparable as possible between samples.The subtle change in substrate preparation led to significant differences in (1) the extent of oxygen lost by the LFO film and (2) the magnetic behavior of the reduced LFO film.Both films exhibited key characteristics of FM behavior at low temperature, including hysteresis and remnant magnetization.The results here show an ability to access two different FM-like states from the same starting materials via oxygen reduction and suggests the possibility of using LFO for nonvolatile multistate memory [41]. 
Sample Synthesis and Annealing A set of three identical LFO films were grown on (001)-oriented SrTiO 3 (STO) substrates from the same wafer batch (Shinkosha, Kanagawa, Japan) but prepared using different methods commonly reported in oxide film growth literature.One substrate (labeled "asreceived STO") was only degreased with acetone and ethanol in an ultrasonic bath for 5 min then dried under nitrogen gas flow prior to growth.The second substrate (labeled "DI-rinsed STO") received the same degreasing and drying as the first, followed by a rinse in DI water and redrying.The third and final substrate (labelled "annealed STO") was degreased, DI-rinsed, then annealed at 950 • C for 2 h under pure oxygen flow ramping at 5 • C/min during heating and cooling.The purpose of this substrate preparation variation was to test whether the reduction process in LFO was sensitive to the substrate preparation.For brevity, the results focus on the two end-points of this series: the samples grown on "as-received STO" and "annealed STO". After substrate preparation, the LFO film growth, metal gettering layer deposition, and oxygen gettering anneal were all performed identically, or simultaneously where possible.Next, 20 nm-thick LFO films were grown by pulsed laser deposition using a substrate temperature of 550 • C and substrate-target distance 45 mm for all samples, with a heater temperature ramp rate of 15 • C/min for both heating and cooling.An oxygen background of 0.0025 mbar (0.25 Pa) was used during heating and deposition and 100 mbar (10 kPa) during cooling.A KrF excimer laser (λ = 248 nm) with a fluence of ~2 J/cm 2 and 3 Hz pulse repetition rate was used to ablate material from a sintered stoichiometric LFO target.Following LFO deposition, all three samples simultaneously received a 10 nm Ta metal gettering layer deposited in an ultrahigh vacuum sputtering system followed immediately by a 1 h in situ anneal at 600 • C under vacuum (p < 10 −9 mbar) to drive oxygen gettering from the LFO films.The metal deposition and oxygen gettering anneal were performed in the same vacuum system without breaking vacuum. Sample Characterization Methods The substrate surfaces were characterized before and after LFO deposition using atomic force microscopy with a Veeco Nanoscope V system (Plainview, NY, USA) under ambient conditions.Crystallinity and orientation of the bare LFO films was confirmed via X-ray diffraction (XRD) on a Bruker D8 Discover system (Billerica, MA, USA) with Cu Kα radiation.Crystallinity in the annealed Ta/LFO multilayers was subsequently measured using XRD on a Rigaku SmartLab (Tokyo, Japan) with Cu Kα radiation.Characterization of atomic structure and elemental distribution were carried out using scanning transmission electron microscopy (STEM) and energy-dispersive X-ray spectroscopy (EDS).Cross-section specimens for STEM and EDS studies were made using a FEI Helios Dualbeam Nanolab 600 (Valley City, ND, USA) focused ion beam.A final cleaning cycle of the cross-section specimens was conducted at 2 keV.High-angle annular dark-field (HAADF) imaging in STEM was performed using a Themis Z (Thermo Fisher Scientific, Waltham, MA, USA) equipped with a probe aberration corrector and a four-quadrant Super-X EDS detector.The accelerating voltage of the microscope was 200 keV and the semi convergence angle was 24 mrad.EDS elemental maps were obtained with an 80 pA beam current and a pixel dwell time of 20 µs. 
Element-specific local structure and magnetic analyses were performed using X-ray absorption spectroscopy (XAS) and X-ray magnetic circular dichroism (XMCD) measurements collected at 45A2 at the Taiwan Photon Source National Synchrotron Radiation Research Center (Hsinchu City, Taiwan).XAS at Fe L 2,3 -edge was taken using a fixed circularly polarized X-ray with a magnetic field of ±1 T applied in the film plane.The X-rays were incident at an angle of 30 • with respect to the film surface.Temperature was set at 77 K during the collection of XAS under a total fluorescence yield (TFY) mode.The XMCD was obtained from the difference between the XAS taken with +1 T and −1 T.More XAS measurement and analysis details can be found in the Supplementary Materials.This element-specific magnetic picture was complimented by volume-averaged magnetization measurements made in a Quantum Design MPMS3 SQUID magnetometer (San Diego, CA, USA).The diamagnetic signal from the STO substrates was subtracted from the raw data by fitting the high field (3-7 T) data at 300 K. The multilayer structural and magnetic depth profiles were measured using polarized neutron reflectometry (PNR) collected at the NIST Center for Neutron Research on the Polarized Beam Reflectometer under a saturating field of 0.7 T at 30 K and 300 K and under a "near remanence" field of 5 mT (0.005 T) at 30 K. The raw PNR data were reduced by subtracting background scans from signals, accounting for polarization efficiencies, and correcting for the beam footprint in reductus [42].The reduced data from each sample was then co-refined to a depth-profile model using the Refl1D software package (v0.8.16) [43,44].The reduced data were fit to a slab-layer model of our samples defined as a scattering length density (SLD) depth profile, which can be separated into nuclear and magnetic components (nSLD and mSLD, respectively).Because only the magnetization (∝ mSLD) changes as a function of temperature, we can improve the modeling accuracy by co-refining all data sets for a given sample to a single "structural" model (nSLD) that is uniform at all temperatures while allowing only the magnitude of mSLD to vary. Results In Figure 1, the average crystal structures of the films are compared using XRD measurements taken before Ta deposition (i.e., on bare LFO films) and again after the Ta deposition plus in situ oxygen gettering anneal.Prior to Ta deposition, the LFO films appear identical with an out-of-plane (oop) pseudocubic lattice parameters of 4.03 Å, thicknesses of 21 nm, and finite-thickness Laue oscillations indicating smooth film surfaces.The absence of any peaks between the (001) and (002) reflections imply the LFO films are single phase and either single crystal or highly textured.After the Ta deposition and oxygen gettering anneal, the LFO on as-received STO (plotted in yellow throughout) still possessed a (001) pseudocubic reflection and Laue oscillations, indicating retention of the original perovskite structure and reasonably sharp interfaces.In this sample, the film peaks shifted towards the STO reflection following oxygen gettering, corresponding to a decreased oop lattice parameter of 3.97 Å and a 1.5% lattice contraction.By contrast, as shown in Figure 1b, the samples grown on DI-rinsed STO (green) and annealed STO (blue) substrates show clear reduction in the LFO film peak intensity and Laue oscillations, indicating a significant loss of long-range crystallinity and interface quality in these samples after oxygen reduction. 
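As a minimal sketch of the lattice-parameter analysis behind the numbers quoted above, the following snippet converts a (002) Bragg angle into an out-of-plane pseudocubic lattice parameter and evaluates the resulting contraction; the 2θ values used are back-calculated for illustration and are not the measured peak positions.

```python
# Minimal sketch: out-of-plane pseudocubic lattice parameter from the (002)
# Bragg angle and the resulting contraction. The 2-theta values are
# back-calculated for illustration, not the measured peak positions.
import math

WAVELENGTH = 1.5406  # Angstrom, Cu K-alpha1

def oop_lattice_parameter(two_theta_deg, order=2):
    """c = order * lambda / (2 sin(theta)) for a (00l) reflection."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * WAVELENGTH / (2.0 * math.sin(theta))

c_before = oop_lattice_parameter(44.95)   # ~4.03 A before gettering
c_after  = oop_lattice_parameter(45.66)   # ~3.97 A after gettering
contraction = (c_before - c_after) / c_before * 100.0
print(f"c = {c_before:.2f} A -> {c_after:.2f} A, contraction ~ {contraction:.1f} %")
```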
Volume-averaged magnetometry data in Figure 1c,d show that the crystal structure differences observed in XRD following oxygen migration correlate with notable differences in magnetic properties. While as-grown LFO films on STO (001) substrates are AFM and exhibit no hysteresis or appreciable magnetization [27,[45][46][47], under in-plane applied fields both reduced LFO films exhibit FM-like hysteresis at low temperatures and saturating fields on the order of 300 mT. The sample grown on annealed STO, which had greater loss of crystallinity, also had a significantly higher remnant and saturation magnetization at low temperature. The difference in magnetization between the two samples at low temperature persists up to room temperature. At room temperature, the sample grown on annealed STO still exhibits a non-zero magnetization near remanence, while the magnetization of the sample grown on as-received STO becomes diamagnetic above about 150 K, indicating the LFO film magnetic signal has become smaller than the STO substrate contribution. The changes in film crystallinity and magnetic properties can be better understood through the high-resolution structural analysis provided by HAADF imaging in STEM. Figure 2a-c presents HAADF-STEM images of the sample grown on as-received STO. The images in Figure 2a,b show this sample contains widespread structural defects associated with the oxygen reduction process. These are detected throughout the film and increase in density towards the Ta interface, seen as dark image contrast in Figure 2a. Despite these defects, this sample retains a chemically abrupt, coherent interface between LFO and STO as shown in Figure 2c and large volume fractions of the perovskite structure, consistent with the retention of a film peak and Laue oscillations in XRD.
In contrast, HAADF-STEM images and EDS elemental maps from the sample on annealed STO, shown in Figure 2d-h, show a near complete loss of the original perovskite structure. This can be seen most clearly in Figure 2d,f, which highlight that the majority of the film has a nanoscale phase-separated microstructure, seen as dark inclusions within a lighter matrix. As the intensity of HAADF-STEM images is sensitive to atomic number (Z), these images imply the lighter matrix phase is La-rich (Z = 57) while the darker inclusions are Fe-rich (Z = 26). This is further confirmed by the EDS elemental maps of La and Fe shown in Figure 2g and Figure 2h, respectively. These elemental maps clearly show the segregation of La and Fe triggered by the oxygen gettering process. Despite massive chemical and structural changes throughout most of this LFO film, there remains a thin interfacial layer of perovskite LFO at the STO interface. This layer is 1.2 nm or ~3 pseudocubic unit cells thick and can be most clearly seen in Figure 2e as a bright stripe at the top of the STO substrate. Its presence intriguingly suggests some type of interfacial effect that stabilizes this region against the loss of oxygen ions. One possible explanation is the formation of a space-charge layer at the interface [48]. However, our data do not let us comment further on this phenomenon or its origin, and it is left as an open question for the community and future experiments.
To gain a better understanding of these complex microstructures and how they connect to the magnetic behavior observed in each sample, the Fe local coordination environment and element-specific magnetism of our samples were measured using XAS and XMCD. A comparison of XAS from our samples to different Fe-valence standards is shown in Figure 3a,b. The absorption line shape of the sample on as-received STO in Figure 3a shows a split peak doublet structure, indicative of Fe ions in an oxygen ligand crystal field. The peak intensity ratio in the L3 doublet is known to correlate with the Fe oxidation state [49], and our nearly equivalent peak intensities indicate an average Fe valence well below the Fe3+ of a fully oxidized LFO film. In stark contrast, the film on annealed STO in Figure 3b shows XAS with a single peaked line shape nearly identical to that of an Fe metal standard. Similarly, the XMCD spectrum from the sample on annealed STO in Figure 3c shows a line shape more consistent with pure Fe than an Fe ion in a ligand field [50,51]. These XAS measurements prove that most of the Fe in the sample grown on annealed STO is in a highly reduced local environment of Fe nearest neighbors, resembling Fe metal. This is consistent with the observation of Fe-rich nanoclusters in STEM and the greater loss of crystallinity apparent in XRD. Decomposition of LFO into La-rich and Fe-rich phases is expected when the oxygen content is reduced below a critical threshold [21,52,53].
A depth-dependent picture of the magnetism and oxygen reduction in our samples was obtained using PNR measurements. By co-refining multiple PNR data sets for each sample shown in Figure 4a, structural nSLD and magnetic mSLD depth profiles were obtained for each sample. As seen in Figure 4b, each sample has two nSLD curves describing the sample's chemical depth profile. This is because a small portion of the LFO surface (~5 mm2) on each sample was shadowed during Ta deposition by mounting clips and therefore did not undergo gettering-induced oxygen loss during the vacuum anneal. Since these Ta-free regions are larger than the neutron coherence length on this instrument [54], they contribute an incoherent signal to the PNR scattering and can be modeled with a distinct depth profile, plotted as the dashed curves labeled STO|LFO. These shadowed regions do not affect any of the other measurement techniques because small, cleaved pieces of the Ta-capped regions were used for those measurements.
Comparing the nSLD profiles of the Ta-capped and Ta-free regions allows for semiquantitative analysis of the oxygen lost by LFO and gained by Ta during the gettering process. In both samples, the nSLD of the Ta-free LFO is larger than the Ta-capped LFO. Assuming only oxygen is migrating at the gettering anneal temperature, which is the case for bulk STO at these temperatures [55], the change in nSLD directly reflects the change in average LFO oxygen stoichiometry. In this case, the filled areas between the nSLD curves are proportional to the total number of oxygen ions lost by each film. Clearly, the refined PNR profiles indicate that more oxygen was lost from the LFO film grown on annealed STO than when grown on as-received STO. This result is corroborated by the refined nSLDs in the TaOx layers, in Figure 4b, and the HAADF-STEM intensity in the TaOx layers, in Figure 2a,d, both of which indicate the TaOx layer on annealed STO gettered more oxygen from LFO.
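A minimal sketch of this semiquantitative comparison is shown below, assuming hypothetical depth profiles rather than the refined PNR models: the area between the Ta-free and Ta-capped nSLD curves over the LFO layer is integrated as a proxy for the relative oxygen loss.

```python
# Illustrative sketch: integrate the nSLD deficit between the Ta-free (oxidized)
# and Ta-capped (reduced) profiles as a proxy for oxygen lost by the LFO film.
# The depth grid and nSLD values below are hypothetical, not the refined models.
import numpy as np

def oxygen_loss_proxy(z, nsld_ta_free, nsld_ta_capped):
    """Integrate the nSLD deficit over the LFO layer (arbitrary units)."""
    return np.trapz(np.asarray(nsld_ta_free) - np.asarray(nsld_ta_capped), z)

z = np.linspace(0.0, 200.0, 101)              # depth within LFO, Angstrom
nsld_free   = np.full_like(z, 5.5e-6)         # 1/A^2, oxidized reference region
nsld_capped = np.full_like(z, 5.0e-6)         # 1/A^2, reduced, Ta-capped film

print(f"Relative oxygen-loss proxy: {oxygen_loss_proxy(z, nsld_free, nsld_capped):.3e}")
```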
Several models were considered and tested to determine the best description of the magnetic depth profile in each sample. A combination of uniform magnetization profiles in the Ta-capped LFO profile and mSLD fixed to zero in the Ta-free profile was found to give the best fit to the PNR data. In agreement with our magnetometry and XMCD data, the sample grown on annealed STO was refined to have larger magnetization at all temperatures and to have a non-zero magnetization at room temperature. It is important to note that the uniform magnetic profiles refined from PNR are in fact consistent with the inhomogeneities seen in STEM. This is because the inhomogeneities are several orders of magnitude smaller than the neutron coherence length. In this case, thousands of inhomogeneities are averaged over laterally in each scattering event. Therefore, the uniformity in the magnetic depth profiles is an indication that the distribution of magnetic inhomogeneities in LFO is approximately uniform along the growth direction, which agrees reasonably well with the STEM images in Figure 2. Discussion The data presented here from multiple complementary techniques create a clear and consistent picture of the differences between our samples' microstructure and oxygen reduction. Specifically, our results show that the LFO film grown on annealed STO underwent greater oxygen reduction than the film on the as-received STO substrate. This is most strongly supported by the PNR depth-profile refinements showing greater change in the LFO and Ta layer compositions after oxygen migration, but it is also supported by the greater loss of crystallinity and more reduced Fe valence state observed in our XRD and XAS measurements. As a result of this greater reduction, that LFO film phase-segregated into metallic Fe nanoclusters surrounded by a disordered, La-rich matrix, as confirmed by XAS and STEM-EDS. The presence of Fe metal and likely other Fe-rich decomposition products (e.g., Fe3O4, Fe2O3) in this sample can explain the more robust FM-like behavior. In contrast, the LFO film grown on as-received STO lost less oxygen during the gettering anneal and consequently maintained a large volume fraction of the initial perovskite structure. This is proven by the presence of a film diffraction peak in XRD and multiplet splitting of the XAS signal. The smaller magnetization values in this sample, and the fact that magnetic hysteresis and remnant magnetization only appear at low temperature, are consistent with previous reports of point-defect-induced canting of the parent G-type AFM structure and suggest similar physics here [32][33][34]. The complexity of the reduced LFO film microstructures, coupled with the dependence of Fe and Fe oxide magnetic properties on nanoparticle dimensions and interfaces, makes a deeper analysis of the magnetic ground states challenging. Towards that end, a brief comparison of in-plane and out-of-plane magnetic behavior is provided in the Supplementary Materials, but future studies that include micromagnetic modeling are needed to fully appreciate how the magnetic properties observed here are derived from these structures.
While it is clear that different oxygen reduction occurred and led to the formation of different microstructures and magnetic properties, the question of why different substrate preparations caused these differences cannot be answered by the methods used here. One possibility suggested in the literature is a difference in the width of surface terraces between the substrates. Wider terraces have been suggested to create a lower energy barrier for oxygen ion migration across the film/substrate interface [39]. Within this hypothesis, one expects films grown on substrates with wider terraces to undergo less net oxygen reduction during gettering because faster ion transport across the substrate interface can better replenish oxygen lost by the film to the metal capping layer. AFM measurements of the STO substrate surfaces before and after pre-growth treatments (see Supplementary Materials) show the annealed STO substrate had wider terraces before and after annealing than the other samples. However, the measurements presented here show that LFO grown on annealed STO underwent greater net oxygen reduction, which conflicts with this surface-terrace-based hypothesis. An alternative hypothesis is that annealing STO in oxygen-rich environments increases the oxygen concentration in the first few nanometers of the substrate and thereby causes a sharp drop in oxygen diffusivity in this region, since oxygen diffusion is vacancy-mediated in STO and most perovskites [55][56][57]. This hypothesis agrees better with the data presented here, in particular the greater reduction observed in LFO grown on annealed STO and the presence of oxygen-rich layers at the LFO/STO interface in both samples seen by STEM. Although the reason remains to be determined, a key and clear result from this work is that LFO films can be driven via oxygen migration and reduction to at least two magnetic ground states with different FM-like behavior. Materials that exhibit multiple unique magnetic states are currently sought after for multi-state memory, where information can be stored at significantly greater areal bit density than in current binary memory technologies. Thus, the results here raise the question of whether LFO could be a platform for multi-state, magnetoionic memory. However, the results also suggest that the reduction process is extremely sensitive, and even small differences in the substrate preparation that are sometimes overlooked in experimental design can change the amount of oxygen lost from overlying films.
Figure 1. Average structure and magnetic property characterization. (a) Coupled 2θ-ω X-ray diffraction scans showing the (001) and (002) reflections taken before the Ta deposition (top) and after the gettering anneal (bottom). The as-received XRD data has been shifted vertically for visibility. (b) Comparison of the (001) reflection after Ta deposition and gettering annealing. (c) Field-dependent and (d) temperature-dependent magnetization of the samples on "as-received STO" and "annealed STO" with the applied field in the film plane. 1 emu/cm3 = 1 kA/m. Figure 2. HAADF-STEM images of TaOx/LFO heterostructures grown on (a-c) as-received and (d-h) annealed STO substrates. (b,e) Magnified images near the LFO/STO interface from the locations in (a,d) indicated by the orange and blue boxes, respectively. (c) High-magnification HAADF-STEM image from (b), marked by the red dotted box, showing the film/substrate interface with cations overlayed. (f) Magnified HAADF-STEM image from the yellow box in (d), and corresponding EDS elemental maps of the cations, (g) La and (h) Fe. Figure 3. Local structural and magnetization measurements on the oxygen-gettered films. (a,b) X-ray absorption spectra compared against Fe and Fe2O3 standards. (c) XMCD at 300 K.
Figure 4. Polarized neutron reflectometry data showing a depth-resolved picture of the oxygen migration differences. (a) PNR data plotted as spin asymmetry for three different temperature and field conditions. Error bars are 1σ. (b) The corresponding best-fit depth-profile models resulting from co-refinement of all temperature and field conditions. The nSLD of the bilayer is shown as solid colored lines. The dashed colored curves show the profile of the bare and fully oxidized portions of each wafer (5-10% of the sample area). The shaded area between these curves is proportional to the oxygen lost from each LFO film. The 5 mT mSLD curve for the as-received STO sample was nearly zero and was omitted from the plot for clarity. The point (at z = 160) on each mSLD shows the 95% confidence interval for the magnetization at each temperature-field condition.
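For readers who want to reproduce the kind of oxygen-loss metric described in the Figure 4 caption, the sketch below illustrates one minimal way to turn two fitted nSLD depth profiles into a relative oxygen-loss number. It is purely illustrative: the profile shapes, units, and proportionality constant are placeholders, not values from the PNR refinement.

```python
# Illustrative only: the caption above states that the area between the measured nSLD
# depth profile and the fully oxidized reference is proportional to the oxygen lost.
import numpy as np

z = np.linspace(0, 200, 401)                                  # depth axis, arbitrary units
nsld_oxidized = np.full_like(z, 5.0)                          # placeholder reference profile
nsld_measured = 5.0 - 0.6 * np.exp(-((z - 100) / 40) ** 2)    # placeholder reduced-film profile

oxygen_loss_proxy = np.trapz(nsld_oxidized - nsld_measured, z)
print(oxygen_loss_proxy)                                      # larger area -> more oxygen lost
```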
7,833
2024-03-01T00:00:00.000
[ "Materials Science", "Physics" ]
AMS-3000 LARGE FIELD VIEW AERIAL MAPPING SYSTEM: BASIC PRINCIPLES AND THE WORKFLOW Three-line array stereo aerial survey camera is a typical mapping equipment of aerial photogrammetry. As one of the airborne equipment, it can quickly obtain a large range of basic geographic information with high precision. At present, typical three-line array stereoscopic aerial survey cameras, such as Leica ADS40 and 80, have the disadvantages of small field of view and low resolution, which makes it difficult to meet the demand of large-scale topographic mapping for economic construction. For the urgent need of domestic three linear array aerial mapping camera in our project, we developed the AMS-3000 camera system. Camera features include a large field of view, high resolution, low distortion and high environmental adaptability. The AMS-3000 system has reached the international advanced level on both software and hardware aspects. * Corresponding author INTRODUCTION Three-line array stereo airborne mapping camera can quickly acquire a wide range of high-resolution basic geographic information, which plays a significant supporting role in digital city construction, earthquake relief, resource navigation, environmental protection, homeland security and other works. The airborne digital camera first appeared in the international society for photogrammetry and remote sensing (ISPRS) conference in 2000 (Eckardt et al., 2000). Three-line array stereo airborne mapping camera is an upgrade product of aerial digital camera. Because of its brilliant performance, it has been valued by many countries and has become a research hotspot of aerial photogrammetry (Pechatnikov et al., 2008). Digital aerial mapping cameras are mainly divided into two categories, one mainly using area-array detectors, the other mainly using linear array detectors (Yao et al., 2018). Both of them have their own advantages and disadvantages (Lin et al., 2019, Zhang et al., 2016. The area-array aerial mapping camera has small pixels and high precision, but its base-height ratio is relatively small, and the elevation accuracy is not as good as that of the linear array aerial survey camera. Moreover, when photographing a large area with high overlap rate, the number of image files is huge, resulting in a long processing time. The linear array aerial camera does not need image splicing. And without a shutter, the linear array camera is more stable to produce more uniform images (Cao et al., 2019). However, the linear array camera is affected by the flight attitude, so a high-precision POS system is necessary (He et al., 2015, Yin et al., 2016. At present, the well-known stereo airborne mapping cameras include the ADS80, ADS100 of Leica company (linear array detectors), DMC camera of Z/I company and the SWDC camera (area-array detectors) developed by Chinese research institute of surveying and mapping (Boesch et al., 2016). Recent years, the digital aerial mapping camera in China has a preliminary development, but in the degree of automation and accuracy of aerial survey, there is still a significant gap with other high quality cameras. In addition, it is mainly used for area-array aerial mapping cameras, and the technology development is relatively slow. Compared with foreign area-array stereoscopic cameras, such as the DMC of Z/I company, there is still a gap in some key indicators. 
Imported cameras are expensive and still have some shortcomings, including difficulties in mapping of small and medium-sized scales, low resolution, small working width and low operating efficiency. These aerial mapping cameras have solved part of the demand of the market to some extent, but they are still far from meeting the demand of China's rapid economic construction for a large amount of aerial remote sensing surveying and mapping geographic information. Therefore, it is very necessary to develop three-line array stereo airborne mapping cameras with high resolution and large field of view to meet the needs of mapping of large scales (Lu et al., 2016). Based on the above requirements, AMS-3000 camera system was developed. AMS-3000 large field of view three-line array stereoscopic aerial photography system is a new generation of aerial digital photography system developed by China Academy of Science and Wuhan university. Mounted on a general aviation platform, AMS-3000 can quickly and flexibly acquire high-resolution ground images to achieve 1:1000 large-scale high-precision mapping. The key technology of AMS-3000 is significantly superior to the most advanced ADS40/80 in the world, and it has the high performance to support the economic construction of China. HARDWARE AMS-3000, a three-line array stereo airborne mapping camera with a large field of view, adopts the push-sweep imaging principle. The focal plane of the camera is equipped with multiple high-resolution panchromatic and RGB band line-array detector arrays, which constitute the forward, nadir and backward imaging of the ground. AMS-3000 can simultaneously obtain fully overlapping panchromatic PAN band, red single band, green single band and blue single band images, which can directly generate multiple stereo image pairs. At the same time, three-line array stereo airborne mapping camera in large field of view records multi-band images of ground scenes, which can be used for the synthesis of color and false color images. Figure 1, three-line array surveying and mapping camera has three detectors, which are projected on the ground through the optical system. In the process of photogrammetry, if we know the exterior orientation elements (i.e. the position and attitude angle of the camera in the earth coordinate system) and the interior orientation elements (i.e. the main distance, the position of the main point and the intersection angle) of the three-line array cameras at each scanning time, then the image coordinates of any object point on the ground at three different times on the three-line array detectors can be completely determined. On the other hand, if the coordinate of the image point of the corresponding object is calculated, the coordinate of the object point can also be calculated. RGB detector The three-line array aerial mapping camera with large field of view adopts a long focal length, single lens transmission optical system and multiple detectors with different angles arranged on the focal plane. Six linear array detectors are mounted on the optical focal plane of the three linear array aerial mapping camera, among which three are panchromatic linear array detectors and the other are R, G and B linear array detectors. These detectors are in parallel with each other and perpendicular to the flight direction. When the camera works, each detector continuously scans the ground in a synchronous period and produces six overlapping strip images, forming panchromatic and multispectral images. 
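To make the orientation argument above concrete, the following sketch shows the standard collinearity projection for a single scan time of a line-array camera: given the exterior orientation (camera position and attitude at that instant) and the interior orientation (principal distance and principal point), a ground point maps to image coordinates. It is a generic illustration, not AMS-3000 flight or ground software; the rotation convention, sign choices, and all numerical values are assumptions made for the example.

```python
# Minimal collinearity sketch for one scan line of a pushbroom camera (illustrative values).
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation built from attitude angles (radians); conventions vary between systems."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def ground_to_image(X, camera_pos, attitude, f, principal_point):
    """Project a ground point into image coordinates for one scan time,
    assuming exterior and interior orientation elements are known."""
    R = rotation_matrix(*attitude)
    x_cam = R.T @ (X - camera_pos)                 # ground point in the camera frame
    x = principal_point[0] - f * x_cam[0] / x_cam[2]
    y = principal_point[1] - f * x_cam[1] / x_cam[2]
    return x, y

# Illustration: nadir-looking line, 2 km flying height, 130 mm principal distance.
xy = ground_to_image(np.array([100.0, 50.0, 0.0]),
                     camera_pos=np.array([0.0, 0.0, 2000.0]),
                     attitude=(0.0, 0.0, 0.0),
                     f=0.13, principal_point=(0.0, 0.0))
print(xy)
```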
The main technical parameters of the AMS-3000 are excellent and reflect its high performance. The AMS-3000 has a volume of 500mm * 500mm * 915mm and a weight of 72kg. Its working temperature ranges from -20℃ to 60℃. These parameters fully satisfy common aerial photogrammetry usage scenarios. The camera has a storage capacity of 6TB, which fully meets the data acquisition needs of large-area photographing tasks. With a scan line width of 32,000 pixels, significantly larger than that of mainstream aerial photography cameras, and a base-to-height ratio of 0.89, the AMS-3000 supports both panchromatic and RGB color image photographing. At a flight height of 2400m, the camera can cover 3000m in a single flight strip and acquire 0.1m resolution remote sensing imagery of 180 square kilometers in 10 minutes, which gives it very high data acquisition efficiency. The main features of the AMS-3000 camera include: -High resolution panchromatic and RGB images obtained at the same time, -Telecentric lens with high transfer function, low distortion and large FOV, -High working efficiency with 32000-pixel linear array detectors, -High precision GPS and IMU integration, -Large base-to-height ratio and strong stereo imaging ability, -Active temperature control system and airtight design to guarantee environmental adaptability, -Automatic data processing system. With its unique features, it is designed to meet enormous worldwide demand with higher image quality, strong stereo imaging ability, integrated high-precision GPS and IMU, high working efficiency and reduced costs. The AMS-3000 has powerful performance, as shown in Table 1 and Table 2. Compared with the mainstream aerial cameras in the world, its core performance indexes, such as focal length and coverage width (m) at the same resolution (0.1m), are superior. Table 2 The parameter comparison of ADS100 and AMS-3000 SOFTWARE The DPGridAMS software was developed by a research team led by Duan Yansong of the school of remote sensing information engineering, Wuhan University. The original data obtained by AMS-3000 digital aerial photography can be input directly and, through automatic aerial triangulation, automatic DEM production, large-scale image color blending and other processes, DEM, DOM, DLG, 3D model data and other mapping products can be produced. DPGridAMS is a photogrammetric system specially developed by Wuhan University to process image data of domestic three-line array cameras. It is an important part of the DPGrid series software of Wuhan University. Consistent with the DPGrid design idea, DPGridAMS is also a distributed processing system based on cluster computers, which can run on either ordinary microcomputer networks or high performance cluster computers (blade servers). The characteristics and features of DPGridAMS include: • Distributed network parallel processing with superior performance, • Block adjustment to output corrected exterior orientation elements, • DEM generation based on T0-level and L1-level image matching, • Orthophoto image generation. The software can be widely used in basic mapping, urban planning, land resources, satellite remote sensing, military measurement, highway, railway, water conservancy, electric power, environmental protection, agriculture and many other fields of digital city projects. Figure 3. The software main interface of DPGridAMS With the AMS-3000 system, we tested the hardware and software performance of the camera in Yangjiang city, Guangdong province, China.
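As a rough cross-check of the swath and coverage figures quoted above, the following sketch relates the scan line width and ground sample distance to a swath width, and an assumed ground speed to an areal acquisition rate. The ground speed is not stated in the text and is an assumption of the example; flat terrain and a nadir view are also assumed.

```python
# Rough consistency check of the quoted swath and area-rate figures (simple geometry only).
pixels_per_line = 32000            # scan line width (from the text)
gsd = 0.1                          # ground sample distance in metres (from the text)
swath = pixels_per_line * gsd      # ground swath width
print(swath)                       # 3200 m, close to the ~3000 m quoted

ground_speed = 100.0               # m/s, an assumed cruise speed, not stated in the text
area_km2 = swath * ground_speed * 600 / 1e6
print(area_km2)                    # ~190 km^2 in 10 minutes, consistent with the quoted 180 km^2
```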
In this production test, a Y-12 fixed-wing aircraft was used to fly 5 sorties with the AMS-3000 digital aerial camera. The effective area obtained was over 1000 square kilometers, and the ground resolution was about 0.1 meter (2000m flight height). The experimental area contains many land types, namely water, residential, forest and coast. The CCDs of the three color bands of the AMS-3000 camera are placed on the focal plane with a spacing of 10 microns, and the calibrated panchromatic image is used to automatically calibrate the multi-spectral camera. The calibration parameters of the RGB image can be obtained, and the three RGB bands can be registered to within 2.5 microns using these calibration parameters. The design length of the CCD in each row is 163.78mm, and the calibrated average length is 164.512mm. The design values of the front and rear CCD viewing angles of the AMS-3000 are 21 degrees and 27 degrees, respectively. After calibration, the mean front viewing angle is 21.12 degrees and the mean rear viewing angle is 27.21 degrees. The intersection angle during stereo measurement is 48.33 degrees, and the base-to-height ratio of photogrammetry is 0.95. The main operating environment for this software is common Windows systems. According to the characteristics of the AMS-3000 camera, we referred to the processing flow of the ADS80 and redesigned the processing flow. The newly designed process is more automated and efficient and works well with the AMS-3000 camera. Figure 4 shows the complete workflow of the DPGridAMS software. Figure 4. The workflow of DPGridAMS This system design provides a highly efficient workflow that is able to process large areas consisting of hundreds of thousands of image frames. The improved workflow can better adapt to the original data captured by the AMS-3000 camera and improves the efficiency of the whole process. After the whole process above, the original camera data yield the mapping results of DEM, DOM, and DLG. The ground processing system of the AMS-3000 camera starts from receiving the original data, and carries out radiometric calibration and flight-strip splicing to output level 0 images. Then, it outputs geometrically corrected level 1 images according to the POS data. Finally, color orthophoto images (level 2 images) with geographic information can be generated from the results of automatic matching and color registration. The data are in standard TIFF and GeoTIFF formats, and other related parameters are in .odf format with corresponding ellipsoid parameters (XML format), which can be directly provided to users in various industries. Through aerial photography experiments in different regions, a total of over 100 flight strips were captured, with the image data volume exceeding 20TB. Throughout these experiments, the AMS-3000 system was stable and reliable, with excellent performance. Each scan line of the camera has a width of 32,000 pixels. At an altitude of 2,000 meters, the AMS-3000 camera can acquire remote sensing images of 180 square kilometers with a resolution of 0.1 meters every 10 minutes, which gives it very high data acquisition efficiency. It has successfully completed the tests in the Yangjiang test area of Guangdong Province and the Jiamusi test area of Heilongjiang Province in China, and the software and hardware performance has reached the expected targets.
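The calibrated viewing angles and the stereo geometry quoted above can be related by a simple, commonly used approximation, sketched below. The exact definition of the base-to-height ratio used for the AMS-3000 may differ from this approximation, so the sketch is only an order-of-magnitude consistency check using the numbers from the text.

```python
# Hedged sketch: for a three-line scanner, the stereo intersection angle is (to first order)
# the sum of the forward and backward viewing angles, and one common approximation is
# B/H ~ tan(theta_f) + tan(theta_b).
import math

theta_f, theta_b = 21.12, 27.21          # calibrated forward/backward viewing angles (deg)
print(theta_f + theta_b)                 # 48.33 deg, matching the quoted intersection angle

bh = math.tan(math.radians(theta_f)) + math.tan(math.radians(theta_b))
print(round(bh, 2))                      # ~0.90, the same order as the quoted 0.89-0.95
```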
The test results (Yangjiang test area of Guangdong Province and Jiamusi test area of Heilongjiang Province in China) show that the working efficiency and imaging quality of the whole system are excellent, and the accuracy of the images results can meet the requirements of 1:1000 production, which marks the successful development of the first three-line array high-resolution ground data acquisition and processing system in China. Figure 5. The image results of DPGridAMS Figure 5 is part of the result data of AMS-3000 system after flying in Yangjiang test area of Guangdong Province and Jiamusi test area of Heilongjiang Province. Figure 5 shows the orthophoto generated by the DPGridAMS software. It can be seen that the color of the generated orthophoto image is real without color deviation, and the transition of adjacent areas between different images is quite natural without any traces of seam line. Moreover, the radiative resolution of the image is high and the details are very rich. When the image is enlarged, you can clearly see details of vehicles, zebra crossings, trees and buildings on the street. Figure 6. The 3D results of DPGridAMS Figure 6 shows the 3D models results of DPGridAMS software of Yangjiang test area. Because AMS-3000 camera has high resolution and accurate positioning, the 3D models produced from these images are also exquisite. The entire model has a large enough scope that the details of each building are fully presented. And there is no evident dislocation in the large 3D model. CONCLUSION This paper introduces a new generation of three-line array stereo airborne mapping camera AMS-3000 and the supporting software processing system DPGridAMS. The system adopts a new optical design to solve the restriction between large field of view and high resolution. The most important features are: • Large field view, • High resolution, • Low distortion, • Large FOV, • High precision GPS and IMU integrated, • Hardware and software integration, • Distributed software operating system with high processing efficiency,
3,413.2
2020-08-06T00:00:00.000
[ "Computer Science" ]
Non compact boundaries of complex analytic varieties in Hilbert spaces We treat the boundary problem for complex varieties with isolated singularities, of complex dimension greater than or equal to 3, non necessarily compact, which are contained in strongly convex, open subsets of a complex Hilbert space H. We deal with the problem by cutting with a family of complex hyperplanes in the fashion of [2] and applying the first named author's result for the compact case [13]. Introduction Let M be a smooth and oriented (2m+1)-dimensional real submanifold of some complex manifold X. A natural question arises, whether M is the boundary of an (m + 1)-dimensional complex analytic subvariety of X. This problem, the so-called boundary problem, has been widely treated over the past fifty-five years when M is compact and X is C n or CP n . For a review of the boundary problem see [17], chapter 6. The case when M is a compact, connected curve in X = C n (m = 0), has been first solved by Wermer [19] in 1958. In 1966, Stolzenberg [18] proved the same result when M is a union of smooth curves. Later on, in 1975, Harvey and Lawson in [10] and [11] solved the boundary problem in C n and then in CP n \ CP r , in terms of holomorphic chains, for any m. The boundary problem in CP n was studied by Dolbeault and Henkin, in [7] for m = 0 and in [8] for any m. Moreover, in these two papers the boundary problem is dealt with also for closed submanifolds (with negligible singularities) contained in q-concave (i.e. union of CP q 's) open subsets of CP n . This allows M to be non compact. The results in [7] and [8] were extended by Dinh in [6]. The main theorem proved by Harvey and Lawson in [10] is that if M ⊂ C n is compact and maximally complex then M is the boundary of a unique holomorphic chain of finite mass [10,Theorem 8.1]. Moreover, if M is contained in the boundary bΩ of a strictly pseudoconvex domain Ω, then M is the boundary of a complex analytic subvariety of Ω, with isolated singularities [12] (see also [9]). In [4] Della Sala and the second named author generalized this last theorem to a non compact, connected, closed and maximally complex submanifold M (of real dimension at least 3, i.e. m ≥ 1) of the connected boundary bΩ of an unbounded weakly pseudoconvex domain Ω ⊂ C n . The extension result is obtained via a method of "cut-extendand-paste". In [15] the first named author established the Harvey-Lawson theorem for maximally complex manifolds of real dimension at least 3 (m ≥ 1) contained in a complex Hilbert space, under the addition of a technical hypothesis. The aim of this paper is to combine the techniques of these last papers, in order to generalize the extension result to a non necessarily bounded, connected, closed and maximally complex submanifold M (dim R M ≥ 5, i.e. m ≥ 2) of the connected boundary bΩ of a strongly convex unbounded domain Ω of a complex Hilbert space H. The precise definitions will be given in the following section. The main theorem we establish is the following: Theorem 1.1. Let H be a complex Hilbert space, and M ⊂ H such that (i) M is a smooth maximally complex manifold of real dimension 2m + 1 ≥ 5 (complex dimension m ≥ 2); where Ω is a strongly convex domain; (iii) there exists an orthogonal decomposition H = C m+1 × H ′ such that the orthogonal projection p : H → C m+1 , when restricted to M , is a closed immersion with transverse self-intersections; (iv) M is quasi-locally compact. 
Then there exists a unique analytic chain of finite dimension T in Ω with isolated singularities, such that the boundary of T is M . As already mentioned, the case when Ω is bounded was treated in [15]; therefore we will always suppose Ω to be unbounded. The strategy behind the proof of Theorem 1.1 is similar to that used in [4] and it is actually a simplification of that one. First we get a local and semi-global extension (see section 3), through an Lewy-type extension theorem for Hilbert-valued CR-functions. Then we cut Ω with parallel complex-hyperplanes. Hypotheses (ii) and (iv) are technicalities needed in order to assure that the slices of M are compact, so that we can apply the extension result in [15] (the slices of M are maximally complex, and property (iii) is inherited by the hyperplane). The high dimension of M is needed in order to get the maximal complexity of slices (in an Hilbert space a moments condition makes less sense than in C n ). We give a simple example (see example 4.1) showing that relaxing hypothesis (ii) can lead to the slices of Ω (thus of M ) being unbounded. On the other hand, hypothesis (iv) is unnecessary (because always satisfied) if the following topological conjecture (by Williamson and Janos, 1987 [20]) is true. Using the conjecture of Williamson and Janos, we can get rid of one annoying technical hypothesis: Let H be a complex Hilbert space, and M ⊂ H such that (i) M is a smooth maximally complex manifold of real dimension 2m + 1 ≥ 5 (complex dimension m ≥ 2); where Ω is a strongly convex domain; (iii) there exists an orthogonal decomposition H = C m+1 × H ′ such that orthogonal the projection p : H → C m+1 , when restricted to M , is a closed immersion with transverse self-intersections. Then there exists a unique analytic chain of finite dimension T in Ω with isolated singularities, such that the boundary of T is M . Notations and definitions In the following, we denote by H a complex Hilbert space and by B(x, ρ) the (open) ball of center x ∈ H and radius ρ > 0. We introduce the following quasi-local property (following the terminology of [14]). Definition 2.1. We say that K ⊂ H is quasi-locally compact if, for any x ∈ H, for any ρ > 0, the set B(x, ρ) ∩ K is relatively compact in H. Definition 2.2. Given an open set Ω ⊂ H with smooth boundary, we call it strongly convex at x ∈ bΩ if the Hessian form of the boundary at x satisfies Hess x (·, ·) ≥ ε · 2 for some fixed ε > 0. We call Ω ⊂ H strongly convex if it is strongly convex at all its boundary points. Let x ∈ bΩ be a point of strong convexity, then for every 2dimensional real plane P containing the normal to bΩ at x the set P ∩ Ω is a convex set locally (around x) contained in a parabola. Considering the cone delimited by two tangents lines to such a parabola, symmetric with respect to its axis and close enough to x, we note that P ∩ Ω lies inside such a cone, by convexity; the angle of such a cone depends only on the ε in the definition of strong convexity. This holding for for every plane P and the angle of the cone not depending on P , we have that Ω is contained in a cone of H with fixed angle. Given a real hyperplane L intersecting the cone in a bounded set, all its translations along the axis of the cone will intersect the cone in bounded sets, therefore if L intersects Ω just in x and is tangent to bΩ, then all its translations alond the axis of the cone will intersect Ω in bounded sets. 
Moreover, if Ω is unbounded, ν is the unit vector pointing in the direction of the axis of the cone (with the correct orientation) and Therefore Ω is contained in the intersection between the cone constructed above and {L < C 1 }; such an intersection is bounded, which is an absurd. See [15] for some examples and a discussion of the relations between this definitions and the others that can be found in the literature. In this paper, M will denote a smooth finite-dimensional manifold in H, of real dimension 2m + 1 greater than or equal to 5 and p will always be the projection whose existence is required in the third hypothesis of Theorem 1.1. H x (M ) will be the holomorphic tangent to M will also be required to be maximally complex, i.e. such that dim C H x (M ) = m at all points x ∈ M , since maximal complexity is a necessary condition for being the boundary of a complex variety. Given a smooth real hypersurface S in H, we denote by L x (S) the Levi form of S at the point x; we note that, if S is the boundary of a strongly convex open set Ω, then L x (S) is positive definite for every x ∈ S, i.e. a strongly convex open set is strongly pseudoconvex. The local and semi-global results The aim of this section is to prove the local result. Let 0 be a point of M ⊂ S. We have the following inclusions of tangent spaces: Proof. We will reduce problem to the finite dimensional case proved in [4] (Lemma 3.1; for a different proof see also Lemma 2.3.1 in [3]). Let us consider the following orthogonal decomposition of T 0 (H): The following lemma is an immediate consequence of a well-known fact. wheref is the unique extension of f . Proof. Let f ∈ A(D). Let x be any point in D ′ . We can define wheref is the unique extension of f . χ x is a character of the Banach algebra A(D), therefore continuous of unitary norm. Thus Hence the thesis. The typical setting in which the previous lemma applies is when one is concerned with extension of analytic functions. M over a C m+1 in such a way that the projection π : H → C m+1 is a local embedding of M near 0: since the restriction of π to M is a CR function, and since the Levi form of M has -by the arguments stated above -at least one positive eigenvalue, it follows that the Levi form of π(M ) has at least one positive eigenvalue. Thus, in order to obtain W 0 , it is sufficient to apply the Lewy extension theorem [13] to the CR function π −1 | π(M) . In order to ensure that the extension lies in the Hilbert space H, we consider the orthonormal decomposition found before H = C m+1 × H ′ . Let e j be a complex base of H ′ , and π −1 j (M ) the e j coordinate of π −1 | π(M) . We can apply Lewy's theorem to extend all of the functions π −1 j to a fixed one-sided open neighbourhood U of 0 ∈ π(M ); let us provisionaly denote by p j the extension of π −1 j . For any positive integer k, for any k−tuple (i 1 , . . . , i k ) and for any a ∈ C k , the scalar function Therefore, by Lemma 3.2, we know that . Let z 0 ∈ U and take a j = p ij (z 0 ) for j = 1, . . . , k. Then and, letting z 0 vary in U , we have . This implies that, if the sequence of the partial sums of i e i π −1 i is a Cauchy sequence on π(M ) with respect to the supremum norm, then the same holds true for the sequence of partial sums of i e i p i on U with respect to the supremum norm, implying the convergence of the latter to a holomorphic map from U to H ′ . 
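The displayed formulas in the convergence argument above appear to have been lost in extraction. The following LaTeX fragment is one plausible reconstruction of the missing estimate, offered only as a reading aid and not as the authors' exact computation; in particular, the choice of coefficients (including the conjugation) is an assumption.

```latex
% Hedged reconstruction of the estimate sketched in the proof above.
% For a k-tuple $(i_1,\dots,i_k)$ and $a \in \mathbb{C}^k$, consider the scalar function
\[
F_a(z) \;=\; \sum_{j=1}^{k} a_j\, p_{i_j}(z),
\]
% which extends $\sum_j a_j \pi^{-1}_{i_j}$; Lemma 3.2 then bounds it by its boundary values:
\[
\sup_{z\in U} |F_a(z)| \;\le\; \sup_{w\in \pi(M)} \Big|\sum_{j=1}^{k} a_j\, \pi^{-1}_{i_j}(w)\Big|.
\]
% Choosing $a_j = \overline{p_{i_j}(z_0)}$ and using Cauchy--Schwarz on the right-hand side gives
\[
\Big(\sum_{j=1}^{k} |p_{i_j}(z_0)|^2\Big)^{1/2}
\;\le\;
\sup_{w\in \pi(M)} \Big(\sum_{j=1}^{k} |\pi^{-1}_{i_j}(w)|^2\Big)^{1/2},
\]
% and applying this to blocks of indices transfers the uniform Cauchy property of the
% partial sums of $\sum_i e_i\,\pi^{-1}_i$ on $\pi(M)$ to those of $\sum_i e_i\, p_i$ on $U$.
```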
As for the second statement, we observe that the projection by π of the normal vector of S pointing towards Ω lies into the domain of C m+1 where the above extension W 0 is defined. Indeed, the extension result in [13] gives a holomorphic function in the connected component of (a neighborhood of 0 in) H \π(M ) for which L 0 (π(M )) has a positive eigenvalue, when π(M ) is oriented as the boundary of this component. This is precisely the component towards which the projection of the normal vector of S points, when the orientations of S and M are chosen accordingly. This fact, combined with Lemma 3.1 (which states that any extension of M must be transverse to S) implies that locally W 0 ⊂ Ω ∩ U . Since M is quasi-locally compact, we can cover M with countable many such open sets U i , and consider the union W 0 = ∪ i W i . W 0 is contained in the union of the U i 's, hence we may restrict it to a tubular neighborhood I M of M . It is easy to extend I M to a tubular neighborhood The global result In this section we will prove Theorem 1.1 in the case when Ω is unbounded. Since Ω is strongly convex, by the discussion following Definitions 2.1 and 2.2, we can find a real hyperplane with λ a complex linear functional, tangent to bΩ in 0, such that, for every translation I a = {Re λ = a}, a ∈ R + of I, I a ∩ Ω is bounded (and not empty) and the same holds for nearby hyperplanes. Denoting by L k = {λ = k}, k ∈ C, Re k ∈ R + , also A k = L k ∩ M is bounded and, by the quasi-local compactness of M , the slice A k of M is compact. In view of Sard's lemma, up to modifying the equation of L k by means of another complex linear functional µ L k = {λ + εµ = k} we can suppose the slice A k to be smooth and a transversal intersection, hence of the correct dimension (2m − 1). As a notation, we'll call suitable a slicing hyperplane L k that leads to a smooth, compact, transversal intersection, as above. Thanks to the maximal complexity of M , it follows that each slice A k is maximally complex too. Moreover, the technical hypothesis (iii) of Theorem 1.1 is inherited by the slice. Fix a point in Ω. To this correspond a suitable slicing hyperplane L k0 of the above form, such that nearby parallel hyperplanes are suitable too. Each slice A k , k in a neighborhood U of k 0 , satisfies the hypotheses of the theorem in [15], thus is the boundary of a holomorphic chainà k with support in the hyperplane L k , which is a smooth manifold near bΩ, since there it coincides with the manifold obtained in Corollary 3.5. Our goal is now to glue together the slicesà k :à U = ∪ k∈UÃk and show thatà U is a holomorphic chain, too (without singularities near bΩ, due to Corollary 3.5). It is worth observing that a strictly convexity hypothesis does not suffice to use our slicing method, as the following example shows. Then Ω is strictly convex (i.e. convex and its boundary does not contain lines or line segments). But it is not strongly convex at 0 ∈ bΩ. Observe that the real tangent hyperplane is such that all its translated in the positive x 0 direction intersect Ω and bΩ in unbounded sets. That is also true for complex hyperplanes of the form Hence it is not possible to apply out slicing method to a maximally complex manifold M ⊂ bΩ, since we have no way to assure even the boundedness of the slice. If we show that A U = k∈U A k is an analytic space, the thesis will follow. By [15,Remark 5.4], A U is a continuous family in the parameter k, therefore it is a rectifiable set of real dimension 2m + 2. 
We denote by [ A U ] the current of integration associated to it and we define the map κ : A U → U ⊂ C such that x ∈ A κ(x) for every x ∈ A U ; the map κ is Lipschitz-continuous, therefore by the Coarea formula in [1,Theorem 9.4]. The previous formula implies that the current [ A U ] is of bidimension (m+1, m+1), which is equivalent to the fact that Tan (2m+2) ( A U , x) = V x is a complex subspace for H 2m+2 −a.e. x ∈ A U ; moreover, κ is the restriction to A U of a C-linear map f : H → C 2 , therefore d AU κ x = df | Vx . By formula (9.2) in [1] and the properties of C−linear maps, we get This implies that [ A U ] is a positive current. The topological boundary of A U is given by the union and therefore is again a rectifiable set, this time of dimension 2m + 1. The boundary of the current [ A U ] is concentrated on such a set and is therefore not contained in the bounded open set Summing up, [ A U ] is a (2m + 2)−rectifiable current, which is positive and closed in Ω U ; therefore, by [15,Theorem 4.5], [ A U ] can be represented by integration on the regular part of an analytic set. We denote such a set by V . Let us consider the projection p : H → C m+1 , which is an immersion with self-transverse intersections when restricted to M , and let us suppose that, for some open set Ω U , we can find a linear functional ν : π(Ω U ) → C such that p(A k ) = ν −1 (k) ∩ p(M ) for every k ∈ U and such that p| A k is again an immersion with self-transverse intersections. We can always find such a ν, up to shrinking U ; we can also restrict U further, so that every connected component U j of π(Ω U ) \ p(M ) intersects ν −1 (k) in a non empty set for every k ∈ U . Going through the proof of Theorem 5.6 in [15], we can construct holomorphic functions which realize A k as their graph. What we proved before is that is a holomorphic function whenever (z, F h j,k (z)) belongs to ( A k ) reg ; therefore, we have an analytic set S j,k ⊂ U j ∩ ν −1 (k) outside which the dependence from k is analytic. Let us denote by S j = k S j,k . By an easy coarea argument, we observe that H 2m+1 (S j ) = 0. Finally, the functions F h j are bounded on U j because their images are contained in any ball which contains Ω U ∩ M , which is bounded. Therefore, we can extend the functions F h j as holomorphic functions through S j . Obviously, the graph of F h j on U j coincides with the closure of its graph on U j \ S j ; therefore, the collection of the graphs of the A finite dimensional analytic variety in H is contained in a finite dimensional complex manifold and it is a complex variety in the latter. Therefore, we can repeat almost verbatim the argument used in [4] to show that the singularities of A U are a discrete set. The chain T we constructed is unique in Ω; this follows from the fact that there is no holomorphic cycle of H of positive dimension contained in Ω. Suppose X is a complex analytic subset of finite dimension in H, which is contained in Ω; if we consider the linear functional λ defined at the beginning of this section, obviously the set {|λ| ≤ δ} doesn't intersect X for δ > 0 small enough, therefore the function |λ| attains a positive minimum on X, but then the holomorphic function λ has to be constant on X. Therefore X lies in {λ = c} ∩ Ω which is bounded. By [2], X has to be of dimension 0, but this is impossible. 
✷ We remark that the previous proof works also in the finite-dimensional case, giving a simplification of the argument used in [4], by employing the classical result by King, instead of its Hilbert space analogue. It is also worth noticing that we can to some extent relax the convexity property, asking only for Ω to be convex, strongly convex at one point and strongly pseudoconvex (or at least the Levi-form of bΩ to have at most m vanishing eigenvalues). In fact, strong convexity at one point and convexity everywhere ensure that the slices are compact, if the hyperplanes are parallel to the tangent at the point of strong convexity; we also need the strong pseudoconvexity assumption to guarantee that the Levi form of the boundary has positive eigenvalues, a fact which is implied by strong convexity but it isn't by mere convexity. Proof of Theorem 1.2. It is sufficient to show that if Conjecture 1.1 holds true, then hypothesis (iv) is always satisfied. We thus assume that Conjecture 1.1 holds true and we consider the metric space given by M with the distance d given by the restriction of the distance of H. Endowed with such distance, M is a locally compact space; as a manifold, M is second countable, therefore it is σ−compact. As the closure of { x − x 0 < ρ} in H is { x − x 0 ≤ ρ} for every x 0 ∈ H and ρ > 0, the same holds when the open and closed balls are intersected with M . Therefore, the metric d is Heine-Borel, i.e. bounded closed sets are compact. Now, let us take x ∈ M and ρ > 0; by the Heine-Borel property, the set {y ∈ M | d(x, y) ≤ ρ} = M ∩ {y ∈ H | x − y ≤ ρ} is compact, that is, M is quasi-locally compact, therefore we can apply Theorem 1.1 and obtain the desired result. It would be nice to get rid of the technical hypotheses (ii), (iii), (iv) or to find examples showing that the extension does not hold without them. Hypothesis (ii) (or its weaker version explained after the proof of Theorem 1.1) is needed to apply the cut, extend and paste method as we presented it (see example 4.1), but might not be necessary to extension, as another line of proof might be possible. Hypothesis (iii) is already present in the compact case treated in [15], and already in that case it would be nice to see whether it is a necessary request or not. It is worth noticing that an example showing extension does not hold just under hypotheses (i), (ii) and (iii) would be an indirect proof of the falseness of Williamson and Janos' conjecture. A possible direction for future research on the subject is that pursued in [5] in C n : given a (pseudo)convex domain Ω ⊂ H, and a subdomain A ⊂ bΩ is it possible to find a domain E ⊂ Ω depending only on Ω and A such that every maximally complex manifold (of real dimension at least 5, satisfying some technical conditions) M ⊂ A is the intersection of the boundary of a complex variety W ⊂ E with A? Thanks to what we proved in this paper, if Ω is a strongly convex domain, and A = bΩ, then E = Ω. Thus the question we are asking is indeed a generalization of the main result of this paper. Another quite natural question is whether it is possible to extend this result (or the one contained in [15]) to Banach spaces.
5,647.6
2012-11-20T00:00:00.000
[ "Mathematics" ]
Morphology of Meteorite Surfaces Ablated by High-Power Lasers: Review and Applications : Under controlled laboratory conditions, lasers represent a source of energy with well-defined parameters suitable for mimicking phenomena such as ablation, disintegration, and plasma formation processes that take place during the hypervelocity atmospheric entry of meteoroids. Furthermore, lasers have also been proposed for employment in future space exploration and planetary defense in a wide range of potential applications. This highlights the importance of an experimental investigation of lasers’ interaction with real samples of interplanetary matter: meteorite specimens. We summarize the results of numerous meteorite laser ablation experiments performed by several laser sources—a femtosecond Ti:Sapphire laser, the multislab ceramic Yb:YAG Bivoj laser, and the iodine laser known as PALS (Prague Asterix Laser System). The differences in the ablation spots’ morphology and their dependence on the laser parameters are examined via optical microscopy, scanning electron microscopy, and profilometry in the context of the meteorite properties and the physical characteristics of laser-induced plasma. Laser interaction experiments with meteorites offer a unique opportunity to explore the spectra and the properties of laser-induced plasma (LIP), suitable for the simulation of the physics and chemistry of meteor plasma. The classical application of such studies performed with tabletop (up to J-class lasers) as well as high-power (up to kJ-class lasers) laser sources involves the exploration of hypervelocity atmospheric entry, the effects of asteroid impact, space weathering, and bulk elemental analysis. This idea was originally proposed by William J. Rae and Abe Hertzberg from the US Cornell Aeronautical Laboratory in in 1964 [21]. In the early 1970s, Hapke et al. [22] employed laser ablation in the first simulation of impact evaporation, and Pirri et al. [23] pioneered the first fundamental description of laser light interaction with targets. Subsequent studies explored the formation of ions and dust particles [24], impact physics [25], hypervelocity damage on spacecraft materials [26], impact melting and the recrystallization of asteroid surfaces [27], space weathering by micrometeorite impacts [28][29][30][31][32][33][34], the weathering of the lunar surface [35,36], impact ejecta redeposition [37], impact shock wave propagation in materials [38], distribution, mineralogy, and the composition of ejecta produced by the high-velocity collisions of planetary bodies [39,40], the surface structures and reflectance properties of regoliths produced by space weathering [41], the crater-like structures formed after the laser shot [42], or the transformation of carbonates in terrestrial impact craters [43]. The chemical consequences of an impact event were explored in 1989 by a group led by William J. Borucki and Christopher P. McKay [44]. Using laboratory lasers with energy up to 1 J, they estimated the yields of molecules formed by impact-induced atmosphere transformation [45]. Managadze et al. [46] explored the synthesis of more complex organic substances, Nna-Mvondo et al. [47,48] demonstrated impact-induced chemical synthesis on icy bodies, and Navarro-Gonzáles et al. [49] explored the impact-assisted synthesis of nitrates on early Mars. Our current study focuses on the morphology of meteorite ablated surfaces. 
However, the same kJ-TW-class high-power PALS laser [50] was employed for the first time in research focused on asteroid impact chemical consequences almost 20 years ago. Simulations by PALS demonstrated the impact synthesis of amino acids [51], canonical nucleobases [13,14,[52][53][54], and sugars [55]. They also helped to explore the transformation of simple molecules occurring on early terrestrial planets [56], such as formamide [57], isocyanic acid [58], hydrogen cyanide [15], acetylene [59], methane [60], or carbon monoxide [56,61]. In addition to laboratory astrophysics, lasers are or will be applied in many space technologies (reviewed in [62] and references therein) for the in situ exploration and prospection of asteroids, comets, space debris mitigation, propelling space vehicles, or for the deflection of potentially dangerous near-Earth objects [63][64][65][66][67]. For their development, laboratory interaction experiments are crucial because the ablation process is significantly influenced by physical/chemical matrix effects. The ablation behavior of minerals common on asteroids has already been the subject of several scientific studies [68][69][70]. However, the behavior of the individual components does not have to reflect the behavior of the whole complex sample. These factors complicate, mainly, the scaling of parameters that are crucial for particular technological designs. Such a complex matrices are also well represented by meteorites. Therefore, interaction experiments with real meteorite specimens that directly represent interplanetary matter can be particularly valuable for evaluating the potential of these possible future high-power laser applications. In this paper, we aim to provide a systematic comparison of the meteorite surface interaction with three selected high-power lasers: (1) a Ti:Sapphire femtosecond laser, (2) the Bivoj laser of the infrastructure HiLASE, and (3) the Prague Asterix Laser System (PALS). Our study is aimed mainly at the ablation morphology with respect to the properties of meteorites, and at the physical characteristics of LIP. Our results appeal to, but are not limited to, the advantages and possible applications of comparable experiments for mimicking meteor spectra, interplanetary matter weathering, micrometeorite impact, or asteroid collisions. Moreover, we discuss the extrapolation of experimental parameters to those suitable for the application of lasers in future space technologies. Materials and Methods Interaction experiments were conducted for the first time with unique high-power laser systems: a Ti:Sapphire laser with a power of 0.02 TW, the iodine high-power PALS laser with a power of 1.7 TW, and the cryogenically cooled multislab ceramic Yb:YAG Bivoj laser with a power of 0.01 TW. Key experiments were focused on a basic comparison of the physical interaction between the iodine PALS laser, providing relatively long subnanosecond laser pulses (350 ps), and the Ti:Sapphire laser, with very short femtosecond pulses (50 fs), and meteorite samples in vacuum at a pressure of 10 −2 -10 −3 mbar. In the case of these two laser sources, emission spectroscopy was employed in order to provide additionally a physical characterization of laser-induced plasma (LIP). Additionally, we compared the physical interaction between the PALS and Ti:Sapphire laser and a third high-power Bivoj laser. 
Two interaction experiment options were explored for the Bivoj laser-at ambient air pressure and with continuous water flow over the sample surface to simulate the enhanced stress factors occurring during the hypervelocity atmospheric entry. In addition to a spectral characterization of the plasma and ablation spot mapping via scanning electron microscopy, optical microscopy, and 3D profilometry, we also employed a high-speed camera for recording the laser plume expansion on the PALS infrastructure. The interaction experiments were performed on meteorite specimens summarized in Table 1. Experimental Setup of Interaction Experiments To accomplish the subsequent analysis of laser spots, the polished sides of meteorite samples were ablated. The set parameters of all three lasers, employed together with laser-induced plasma temperatures and electron densities obtained via optical emission spectroscopy, are given in Table 2. The uncertainty of spectroscopy-based data was estimated to be <3%. High-Power Terawatt-Class Iodine Asterix Laser The core of the Prague Asterix Laser System (PALS) is an iodine gas laser capable of delivering energy of up to 650 J in a single shot (pulse duration 350 ps, wavelength 1315 nm) [50]. The repetition rate of the PALS laser is ∼30 min. During the PALS experimental campaign, meteorite specimens were placed in the vacuum chamber and slightly out of the focus of the laser beam provided by a plano-convex CaF 2 lens ( f /2; f = 60 cm). Depending on their specific positions, the laser spot ranged from 1 to 10 mm in diameter on the polished sample surface. The energy of the laser pulses ranged from 120 to 650 J. For further ablation plasma diagnostics, the radiation emitted by the plasma plume was collected by a collimator directly connected to a high-resolution Echelle spectrograph (ESA 4000, LLA Instruments GmbH, Berlin, Germany), and positioned in the direction of the laser spot at a 20 cm distance from the ablated specimen. A schematic drawing of the experimental instrumentation is depicted in Figure 1. During the experiments with the PALS laser, the longitudinal expansion velocity of the plasma plume was also measured using a high-speed camera. Ti:Sapphire Laser The solid-state Ti:Sapphire laser (Ti:Al 2 O 3 laser), generating ultra-short pulses in the range from 650 to 1100 nm, was set to the wavelength of 810 nm with a pulse duration of 50 fs, energy of 1 mJ, and repetition rate of 1 kHz. The laser beam was focused using a coated sapphire lens ( f /100, f = 1 m). In the case of the Ti:Sapphire laser, the meteorite specimens were placed in the vacuum chamber and attached to a linear stage. The speed of the linear shift was 200 µm/s. A high-resolution spectrograph for plasma diagnostics was also employed during these experiments. The experimental ablation setup for the Ti:Sapphire laser is similar to the one used for the PALS (Figure 1). Bivoj Laser The diode-pumped solid-state laser (DPSSL) called "Bivoj", with a laser wavelength of 1030 nm, is capable of delivering 10 ns pulses with an energy of more than 100 J at a 10 Hz repetition rate, and is classified as the most powerful laser in its class. For the interaction experiments, the energy output was set to ∼5 J, the repetition rate to 1 Hz, and the laser beam was focused into a square spot with an edge length of 3.1 mm to reach an intensity of about 5 GW/cm 2 and, thus, to achieve the desired ablation. 
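As a quick sanity check, and not as part of the paper's methodology, the peak powers quoted in the Materials and Methods can be recovered from the pulse energies and durations listed above and in Table 2.

```python
# Peak power = pulse energy / pulse duration (illustrative arithmetic only).
def peak_power_tw(energy_j, duration_s):
    return energy_j / duration_s / 1e12

print(peak_power_tw(650, 350e-12))   # PALS: ~1.9 TW at the full 650 J; the 1.7 TW quoted
                                     # earlier presumably corresponds to a somewhat lower shot energy
print(peak_power_tw(1e-3, 50e-15))   # Ti:Sapphire: 0.02 TW
print(peak_power_tw(100, 10e-9))     # Bivoj: 0.01 TW
```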
First of all, the meteorite samples were ablated in air by 10 laser pulses, each with an energy of 5 J, accumulated in one spot. Second, the parameters were kept and the surface of the sample was exposed to a continuously flowing water stream. The plasma formed under water increased the pressure to 1-5 GPa. Considering the meteorite fragility, only one shot was focused on the meteorite specimen's surface. In this particular case, emission spectra were not recorded and the typical laser plasma temperature estimated from previous experiments was adopted for comparison with the other lasers used in this study, as summarized in Table 2. Scanning Electron Microscopy For the visualization of the ablation spots, a JEOL 6380LV electron microscope equipped with an Oxford Instruments EDS chemical analysis system was employed. To reveal both chemical and topographical information, the backscattered topography-composition imaging mode was used. All measurements were performed in a low-vacuum mode (i.e., ∼30 Pa) with an electron beam size of about 1 µm. The acceleration voltage was 20 kV, and the beam current was kept at the range of a few nA. Wide-Area 3D Measurement System The ablation spots on the surface of meteorites were also investigated with the VR-5000 microscope profilometer (KEYENCE Int.). The information about the depth, size, and shape of a particular relief was obtained by scanning the sample surface with structured light beams, and was based on the deformation of light bands and triangulation of the shadows detected on the surface structures. Physical Characterization of LIP and Plasma Longitudinal Expansion In order to investigate the physical parameters of laser-induced plasma, such as electron density and excitation temperature, emission spectra were recorded during interaction experiments with the PALS and Ti:Sapphire lasers, and subsequently examined through a numerical fitting process, described in more detail in the following section. For the PALS laser, the plasma plume longitudinal expansion velocity was also investigated using a high-speed camera. Plasma Temperature and Electron Density Plasma temperature and electron density were calculated using an iterative fitting procedure designed for the analysis of high-resolved UV-ViS LIBS spectra. The emission line intensity profile functions intended for the optimization are summarized in our previous paper [16]. Both electron and heavy particle number densities, together with plasma temperature, were fitted onto the function I ij = I ij (N S , N e , T) of a particular transition intensity I ij governed by individual species abundances N S . Each particular line was modeled by a pseudo-Voigt profile function with a Lorentzian line width directly proportional to the total number density of free electrons. Moreover, we assumed the Boltzmann energy distribution to hold in local thermodynamic equilibrium (LTE) plasma and exploited the particular distribution functions in their nonlinearized form. A fully synthetic spectrum was consequently depicted as a sum of individual line profile functions and fundamental plasma physics parameters (T, N S , N e ), optimized onto a fitted experimental record. The physical justification for such a model is dealt with in detail elsewhere; see [16,18] and references therein. The nonlinear optimization procedures were adopted to prevent possible spectral data corruption, which may arise in such warm, dense ablation plasma governed by nontrivial charge transfer phenomena. 
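The fitting procedure described above is specific to the authors' code and line database, but its core idea can be sketched as follows: build a synthetic spectrum as a sum of pseudo-Voigt lines whose intensities follow Boltzmann statistics and whose Lorentzian widths scale with electron density, then optimize the plasma parameters against the measured spectrum. Everything in the sketch (the line list, the width scaling, the starting values) is a placeholder; it omits the individual species abundances N_S, the instrumental function, self-absorption, and the LTE checks used in the real analysis.

```python
# Much-simplified sketch of a synthetic-spectrum fit for (T, n_e); placeholder line data.
import numpy as np
from scipy.optimize import least_squares

K_B = 8.617e-5  # Boltzmann constant in eV/K

def pseudo_voigt(x, x0, w_g, w_l, eta):
    gauss = np.exp(-4 * np.log(2) * (x - x0) ** 2 / w_g ** 2)
    lorentz = 1.0 / (1.0 + 4 * (x - x0) ** 2 / w_l ** 2)
    return eta * lorentz + (1 - eta) * gauss

def synthetic_spectrum(x, T, n_e, scale, lines):
    """Sum of pseudo-Voigt lines; Boltzmann upper-level populations, Lorentzian width
    taken proportional to electron density (a crude Stark-like scaling)."""
    y = np.zeros_like(x)
    for x0, g_k, A_ki, E_k in lines:
        intensity = scale * g_k * A_ki * np.exp(-E_k / (K_B * T))
        w_l = 0.05 * (n_e / 1e16)          # illustrative width scaling only
        y += intensity * pseudo_voigt(x, x0, w_g=0.03, w_l=w_l, eta=0.5)
    return y

# Placeholder line list: (centre [nm], g_k, A_ki [1/s], E_k [eV])
lines = [(404.6, 9, 2.0e7, 4.5), (438.3, 11, 5.0e7, 4.3), (527.0, 7, 1.0e7, 3.2)]
x = np.linspace(400, 540, 2000)
observed = synthetic_spectrum(x, T=9500.0, n_e=5e16, scale=1.0, lines=lines)

def residuals(p):
    T, log_ne, scale = p
    return synthetic_spectrum(x, T, 10 ** log_ne, scale, lines) - observed

fit = least_squares(residuals, x0=[8000.0, 16.0, 0.8])
print(fit.x)   # recovered (T, log10 n_e, scale)
```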
Plasma Expansion The laser-induced plasma behavior is strongly influenced by several parameters-the characteristics of the ambient medium (pressure, chemical composition), the physical and chemical properties of the target, and the laser source features (energy density, wavelength, pulse duration). For the plasma formation, a breakdown threshold must be reached by focusing the laser beam with a lens into gas or onto the surface of a solid or liquid. The absorption of the laser pulse energy and the resulting ablation of solid samples are achieved through several interaction processes. These initial processes differ depending on the pulse duration and the material properties, as described, for example, in [71]. At the focus spot, rapid material heating and the release of electrons, ions, atoms, molecules, and dust particles from the sample surface is induced. The plasma plume created above the sample surface and containing the ablated matter expands with supersonic velocity, which results in the propagation of a shock wave compressing, heating, and further ionizing the ambient gas in all directions. After the termination of the laser pulse, the plasma gradually dies away through recombination and the diffusion of electrons and ions. The expansion velocity of plasma is a rapidly changing parameter over time. It depends on several experimental parameters, such as the laser characteristics (energy, pulse duration), focus spot size, ambient pressure, and target material. Experimental observations ( [72,73], and references therein) have shown that for ultra-short laser pulses (fs-ps), an early-stage plasma with longitudinal extent is formed ahead of the material vapor plume and above the sample surface. This plasma is characterized by a high electron number density, which originates from the ambient air breakdown assisted by electrons emitted from the target. During the duration of the laser pulse, the propagation of the plasma ionization front can reach velocities of up to 10 9 cm/s, but decreases rapidly after the pulse termination. The later-appearing hemispherical vapor plume consisting of the ablated matter expands with velocities of lower than several orders. The presence of an ambient gas plays a significant role in the expansion process of the laser-induced plasma. Compared to plasma in vacuum, plasma in an ambient gas is spatially contracted, thanks to the confinement effects, which results in a smaller size of the plume and a higher density and collision frequency between ablated and ambient gas species, and the expansion is slowed down [71]. Measurement of Laser Plume Longitudinal Expansion Velocity A streak camera was used for the laser plasma expansion velocity measurement. The experimental set up is depicted in Figure 2. The laser beam was focused by the lens 1 ( f = 0.6 m) on the target surface, thereby creating the ablation plasma. The plasma light coming out of the chamber through an optical window was collected by the lens 2 ( f = 0.5 m), and projected via an aluminum mirror on the entrance slit of the streak camera with magnification M = 1. There were two filters positioned in front of the camera-a neutral ND filter for the optimization of the light intensity level, and a cutoff filter for the laser beam radiation. The position and angle of the mirror were precisely set so that the central region of the plasma plume (yellow line in Figure 2) was projected on the entrance slit (50 µm) of the streak camera. 
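A minimal sketch of how such a streak record can be converted into a longitudinal expansion velocity is given below: fit the leading-edge position of the plasma emission against time and take the slope, corrected for the imaging magnification. The position and time values are invented for illustration; the actual analysis depends on the sweep rate and calibration of the streak camera.

```python
# Illustrative conversion of a streak-camera record into an expansion velocity.
import numpy as np

M = 1.0                                        # optical magnification (from the setup above)
t_ns = np.array([0.5, 1.0, 1.5, 2.0, 2.5])     # time after the pulse (ns), illustrative
z_mm = np.array([0.3, 0.75, 1.2, 1.6, 2.05])   # leading-edge position on the slit (mm), illustrative

slope_mm_per_ns, _ = np.polyfit(t_ns, z_mm / M, 1)
velocity_km_s = slope_mm_per_ns * 1e3          # 1 mm/ns = 1000 km/s
print(round(velocity_km_s))                    # ~880 km/s, within the 400-1000 km/s range in Table 3
```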
The temporal evolution of the plasma radiation passing through the entrance slit along the z-axis (representing the distance from the target) is shown in Figure 3, panel A. The main direction of the plasma plume longitudinal expansion is depicted by a red vector in Figure 3. PALS Due to its high energy, the PALS laser, even with the sample placed out of the beam focus, is capable of ablating a relatively large volume (mm³) of the chondritic meteorite in one single shot of about 500 J, thus creating a crater with a diameter of roughly 10 mm and an average depth of about 100 µm. A sectional view of this ablation crater is shown in Figure 4, panel A, and pictures obtained via optical and electron microscopy and profilometry are depicted in Figure 5. The profilometry also indicates that the ablation craters became significantly deeper with decreasing size of the beam focus (up to 250 µm for a 1 mm focus and about 500 J for chondritic meteorites). The iron parts of the Seymchan pallasite evinced greater resistance during ablation, and the depth of the craters is, therefore, around 70 µm for a laser pulse with an energy of about 650 J and a diameter of 1 mm. These ablation craters also show very distinct hillock formations in the center (Figure 4, panel C). So far, the same phenomenon has not been recorded for meteorites, but it has been observed for zinc [74] or silicon [75]. The origin of this effect could be attributed to non-uniform energy deposition (see intensity profile in Figure 4, panel B), the elastic rebound of the lattice, melt flows, or the recoil pressure [75,76]. For the interaction experiments, the mean excitation, ionization, and thermodynamic temperatures of the ablation plasma for PALS were estimated from the emission spectra at 5100 K, 12,100 K, and 20,000 K, respectively. The electron density of the laser-induced plasma reached the order of 10^15-10^16 cm−3. These physical parameters are also summarized in Table 2. The longitudinal expansion velocities of the ablation plasma within 5 ns after the laser pulse were studied using a streak camera. The measured values are very strongly affected by the set parameters of the streak camera: the streak range, the sweep rate, and the time resolution, which must be set in accordance with the expected expansion velocity. The velocities obtained during the PALS laser experiments are listed in Table 3, along with the camera settings. The obtained results range from approximately 400 km/s to 1000 km/s. These values are in accordance with the findings of other experimental observations [72,73], and represent the expansion of an early-stage plasma constituted predominantly by electrons. Given that the experiments were conducted under vacuum (∼10−2 mbar), the electrons originated primarily from the target material. Comparing the values across all samples, no clear dependence of the velocities on the laser fluence was discovered. Within individual samples, similar expansion velocities for comparable fluences can be found (e.g., NWA 12269, experiments nr. 49-52). However, there are also relatively significant differences among the obtained values for individual meteorites. Nonetheless, considering the nonhomogeneous structure of the meteorite samples and their non-ideal polished surfaces containing micro cracks and holes, such an outcome is to be expected, and further supports the assumption of the strong dependence of the velocity on the aforementioned experimental factors.
For each ablated specimen, every shot was focused on a different place on the meteorite sample, which, given the grainy mineralogical structure of meteorites, resulted in differences in the overall chemical composition and surface characteristics of the ablated material. (Notes to Table 3: red lines mark a change in the streak camera parameters; the uncertainty of the results obtained with the streak camera is <10%.)

Ti:Sapphire

The Ti:Sapphire femtosecond laser created deep incisions with roughly Gaussian profiles (Figure 6). The Gaussian profile is a de facto imprint of the intensity profile of the laser beam [78,79]. Pictures of the laser spots are depicted in Figure 7. The mean values of the depth and the width of the laser incisions for chondritic meteorites are about 60 µm and 90 µm, respectively. In the case of the Sikhote-Alin iron meteorite, the incisions are shallower (only about 20 µm). The mean excitation temperature for Ti:Sapphire was estimated at 9700 K. The ionization temperature was slightly lower, i.e., 9500 K, while the mean thermodynamic temperature was calculated at 23,000 K, similar to the PALS laser. The electron density reached the order of 10¹⁶-10¹⁷ cm⁻³.

Bivoj

During irradiation in air (after 10 shots, each with an energy of 5 J), the beam of the Bivoj laser, with a square profile, was able to create shallow craters with a depth of roughly 10 µm (Figure 8, panel A and Figure 9, panels A-C). It can be expected that the depth of the craters would become more prominent (to a certain level) with a smaller focus of the laser beam, as was observed in the case of the PALS. Examination via optical microscopy (see panel A of Figure 9) also showed relatively large, singed areas around the laser spots. In the case of ablation under a continuous flow of water (Figure 8, panel B and Figure 9, panels D-F), the optical profilometry did not detect any altitude differences. This finding suggests that, despite the apparent melting damage observed by electron microscopy (Figure 10), no ablation occurred due to the presence of the water barrier.

Comparison of Physical Interaction

The analysis of the ablation spots and their near surroundings by electron microscopy and optical profilometry proved the crucial importance of the laser parameters (e.g., fluence, pulse duration) and of the experimental conditions maintained during ablation for the interaction experiments. In the case of the femtosecond Ti:Sapphire laser, the pulse duration (50 fs) was too short for thermal effects (e.g., melting) to take place to a larger extent. At the beam focus, the solid meteorite sample was therefore transformed directly into gaseous plasma. Because of the absence of thermal effects, the femtosecond laser created craters with clearly defined edges and without any significant amount of molten material deposited around them. In the vicinity of the ablation spots from lasers with a longer pulse duration, where thermal effects are more significant, a considerable amount of molten meteoritic material was observed. In the case of the high-power PALS laser, relatively large droplets of ejected molten material could be found even at greater distances from the ablation crater, which was caused by the high energy of the laser pulse. A visual comparison of the ablation site surroundings recorded via scanning electron microscopy for the femtosecond Ti:Sapphire laser and the picosecond PALS laser is shown in Figure 11.
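Because the femtosecond incisions are essentially an imprint of the Gaussian beam profile, their depth and width can be estimated from a profilometry cross-section by a simple Gaussian fit. The sketch below is illustrative only; the profile data are synthetic and merely chosen to resemble the ~60 µm depth and ~90 µm width quoted above for chondritic material:

```python
import numpy as np
from scipy.optimize import curve_fit

np.random.seed(0)

# Synthetic profilometry cross-section of a femtosecond-laser incision
# (lateral position x in µm, surface height d in µm); not measured data.
x = np.linspace(-120, 120, 49)
d_true = -60.0 * np.exp(-x**2 / (2 * 38.0**2))           # ~60 µm deep, ~90 µm FWHM
d_meas = d_true + np.random.normal(0.0, 1.5, x.size)     # add measurement noise

def gauss(x, depth, sigma):
    """Inverted Gaussian crater profile: depth < 0, width set by sigma."""
    return depth * np.exp(-x**2 / (2 * sigma**2))

popt, _ = curve_fit(gauss, x, d_meas, p0=(-50.0, 40.0))
depth, sigma = popt
fwhm = 2.355 * abs(sigma)                                 # full width at half maximum
print(f"fitted depth ≈ {abs(depth):.0f} µm, FWHM ≈ {fwhm:.0f} µm")
```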
The melting effect was most evident inside the craters created by the Bivoj laser under the flow of water (see Figure 10). The presence of water impeded the evaporation of the meteoritic material, and only melting processes occurred. In the case of the laser interaction with stony meteorite samples and with the large olivine grains in the Seymchan pallasite, material chipping, dusting off, and crumbling were observed due to the mechanical stress caused by the laser pulse-induced shock. In the case of the Seymchan pallasite, no ablation spots with any signs of olivine chipping were discovered. A visual comparison of the non-ablation effects of the PALS laser on the olivine phase and the Fe-Ni matrix is depicted in Figure 12, panel A. An example of an olivine grain damaged by the PALS laser is shown in Figure 12, panel B. In several cases, meteorite specimens with a thickness of about 1 mm and a diameter of about 4 cm did not withstand even the shock of a single PALS laser pulse and were crushed into small pieces ranging in size from a few mm to 1 cm. Some meteorites were also broken because of the high pressure (1-5 GPa) acting on the samples while being irradiated under a continuous flow of water by the Bivoj laser.

Evaporated Volume and Crater Depth

In terms of material removed per single shot, the high-power PALS laser appears to be the most efficient source. It is capable of ablating up to ∼1.7 mm³ of chondritic material in one pulse with a 1 mm focus and an energy of 650 J, which corresponds to a fluence of about 83,000 J/cm² and an irradiance of 240 TW/cm². For comparison, the Bivoj laser, reaching a fluence of 50 J/cm² and an irradiance of 5 GW/cm², was able to remove approximately 0.08 mm³ of chondritic material in 10 pulses. If we consider a linear increase [80,81] in the depth of the crater ablated by the Bivoj laser, approximately 200 pulses would be sufficient to reach the same ablated volume as for the PALS laser, but the total cumulative energy delivered per unit area would be eight times lower (∼10,000 J/cm²). This limited ablation efficiency at high fluences for lasers with longer pulse durations (ps-ns) is primarily attributed to plasma shielding, which reduces the laser energy able to reach the surface [80,82,83]. An investigation of the dependence of the evaporated volume on the PALS laser's radiation fluence, summarized in Figure 13, showed an apparent dependence on the type of ablated material. Fe-Ni alloy areas on the Seymchan pallasite surface evinced greater resistance against ablation; moreover, melting dominates over ablation itself. Chondritic material poor in iron (SaU 571-L chondrites) is ablated significantly more easily and, therefore, shows a much steeper trend. This can be attributed to material properties such as reflectivity, strength, surface structure, morphology, and thermal conductivity. Furthermore, the bottom-surface profilometry of the Ti:Sapphire ablation incisions (see Figure 14) indicates significant ablation non-uniformity. This rugged relief is due to the grainy, inhomogeneous structure of meteorites and the different resistances of the individual mineralogical components comprising the meteorite sample.

Laser Applications in Future Space Technologies

Experiments focusing on the interaction of lasers with real samples of interplanetary matter are also of major importance for the future space applications of laser technologies.
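Before turning to those applications, the pulse-budget comparison made in the previous section can be checked with a back-of-the-envelope calculation. The sketch below simply reproduces the arithmetic using the approximate figures quoted above (and the stated assumption of a linear depth increase with pulse number); it is illustrative only:

```python
import math

# Rough pulse-budget comparison between the PALS and Bivoj lasers, using the
# approximate figures quoted in the text (illustrative arithmetic only).

# PALS: single ~650 J pulse, 1 mm focus spot -> ~1.7 mm^3 of chondrite removed.
pals_energy_J   = 650.0
pals_spot_cm2   = math.pi * 0.05**2              # 1 mm diameter -> radius 0.05 cm
pals_fluence    = pals_energy_J / pals_spot_cm2  # ~83,000 J/cm^2
pals_volume_mm3 = 1.7

# Bivoj: 10 pulses of 5 J over a 3.1 x 3.1 mm square spot -> ~0.08 mm^3 removed.
bivoj_pulse_J  = 5.0
bivoj_spot_cm2 = 0.31 * 0.31
bivoj_fluence  = bivoj_pulse_J / bivoj_spot_cm2  # ~50 J/cm^2
bivoj_rate_mm3 = 0.08 / 10                       # volume removed per pulse

# Assuming the removed volume grows roughly linearly with the number of
# pulses, estimate how many Bivoj pulses match one PALS shot.
pulses_needed    = pals_volume_mm3 / bivoj_rate_mm3
cumulative_J_cm2 = pulses_needed * bivoj_fluence

print(f"PALS fluence  ≈ {pals_fluence:,.0f} J/cm^2 (single shot)")
print(f"Bivoj fluence ≈ {bivoj_fluence:,.0f} J/cm^2 per pulse")
print(f"≈ {pulses_needed:.0f} Bivoj pulses to match one PALS shot "
      f"(cumulative ≈ {cumulative_J_cm2:,.0f} J/cm^2)")
```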
The technological level of laser systems has witnessed a series of innovations and improvements in recent years, and novel potential applications of laser technologies are therefore being considered. First of all, high-power lasers have been suggested for the deflection of hazardous asteroids, comets, and other near-Earth objects (NEOs), and for de-orbiting space debris [84]. One of the most significant advantages of this technology compared to other approaches [85-87] is the remote control of the NEO or debris trajectory over a long period of time without requiring complicated landing maneuvers. The trajectory alteration is achieved via the irradiation of the asteroid or debris surface with a high-intensity light source. Within the focus area, the energy absorption causes the sublimation of the exposed solid material into a gas. The expanding ablation plume then provides a continuous low thrust, acting on the ablated object and pushing it gradually from its original trajectory. This thrust method is considered to be an analog of standard rocket propulsion [88]. The material ejected from the ablated surface could then be collected by a spacecraft flying through the ablation plume and, together with in situ analysis of the plasma plume emission spectra, could help even further with the exploration of the composition, formation, and evolution of the bodies of the Solar System [89]. For such space technologies, it will be essential to correctly determine the laser parameters in order to find a balance between the technological requirements (e.g., power consumption, size, weight) and the conditions that need to be met to achieve efficient ablation. Our results show that even a high-power laser capable of reaching a significant fluence, of the order of tens of thousands of J/cm², is capable of ablating only a few cubic millimeters of chondritic meteorite material (SaU 571, see Figure 13) in a single shot. However, the same outcome can be achieved with a much less powerful laser by accumulating a relatively small number of pulses on a single spot, as discussed in the previous section. This result therefore indicates that a higher fluence does not necessarily result in more efficient ablation and, on an experimental level, further supports the fact that high-repetition or CW (continuous-wave) lasers are rightfully the focus of most laser deflection studies (e.g., [64,65,69,90]). It is also crucial to understand the ablation behavior of the material that constitutes potentially dangerous interplanetary matter. Figures 13 and 14 clearly show that there are differences among individual types of meteorites, and that the ablation is nonhomogeneous even across one sample. Moreover, due to the fragility of the meteorite samples, which was highlighted in Section 3.4, ablation is not the only process responsible for material removal. Spallation, spattering, chipping, and dusting off are of crucial importance and, therefore, should not be neglected in laser ablation models [68]. Although the different resistances of the minerals typical of interplanetary matter are known, there is a lack of studies directly focused on more complex samples, such as meteorites, that would provide information about the overall behavior of interplanetary matter during laser irradiation, and not just its individual components.
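To put the deflection scheme discussed above into rough numbers, laser-ablation propulsion is commonly parameterized by a momentum-coupling coefficient, i.e., the thrust produced per watt of delivered laser power. The sketch below is a generic order-of-magnitude estimate and not a result of this study; the coupling coefficient, laser power, operation time, and asteroid mass are all assumed values chosen only to illustrate the scaling:

```python
# Order-of-magnitude estimate of the ablation thrust acting on an asteroid,
# using the standard momentum-coupling description of laser ablation.
# All numbers are assumptions for illustration, not results of this study.

C_m   = 30e-6     # momentum-coupling coefficient, N per W of delivered power
                  # (assumed; literature values are typically ~10-100 µN/W)
P_las = 1.0e4     # average laser power delivered to the surface, W (assumed)
m_ast = 1.0e9     # asteroid mass, kg (~100 m class rubble pile, assumed)
t_op  = 3.15e7    # continuous operation time, s (about one year)

thrust  = C_m * P_las             # continuous low thrust, N
delta_v = thrust * t_op / m_ast   # accumulated velocity change, m/s

print(f"thrust ≈ {thrust * 1e3:.0f} mN")
print(f"delta-v after one year ≈ {delta_v * 1e3:.1f} mm/s")
```

Even a few millimeters per second of velocity change, applied years before a predicted close approach, can shift an object's arrival time enough to matter, which is why the long, continuous operation assumed here is central to the deflection concept.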
Conclusions

This article summarizes the results of our recent research on the interaction of laser radiation with meteorites and highlights the potential of this kind of experiment for studies focused on meteor plasma simulation, hypervelocity atmospheric entries, and future space applications of laser sources, such as deflecting NEOs and de-orbiting space debris. A series of interaction experiments was conducted on various meteorite samples with three laser sources varying in their parameters: a femtosecond Ti:Sapphire laser and the high-power Bivoj and PALS lasers. The investigation showed that the PALS laser was capable of creating, in one single shot of about 500 J, a crater with a diameter of approximately 10 mm and an average depth of 100 µm in chondritic material. The Fe-Ni parts of the Seymchan pallasite proved to be very resistant to PALS laser ablation, resulting in shallower craters with depths of about 70 µm at an energy of 650 J, but with a 10-times narrower focus spot (1 mm) than that used for the experiments with the chondritic meteorite. Very distinct hillock formations, visually similar in shape to the central uplift in real impact craters, were also observed in the central region of the ablation sites. The average depth and width of the deep incisions created by the femtosecond Ti:Sapphire laser in chondritic material were 60 µm and 90 µm, respectively. The depth was again smaller for the iron meteoritic material, roughly 20 µm. The Bivoj laser, with a square beam profile (3.1 × 3.1 mm), barely ablated the surface of the samples: the depth of the craters was only about 10 µm in the case of irradiation in air. Moreover, the Bivoj water spots showed no altitude differences, and only distinct melting damage was observed. The physical characteristics of the laser-induced plasma for the Ti:Sapphire laser and the PALS laser were determined using optical emission spectroscopy. The plasma electron density was comparable for both lasers and ranged from 10¹⁵ to 10¹⁷ cm⁻³. The estimated mean values of the excitation and thermodynamic temperatures were slightly higher in the case of the Ti:Sapphire laser: 9700 K and 23,000 K, respectively. For the PALS laser, these calculated temperatures were 5100 K and 20,000 K. On the other hand, the mean ionization temperature turned out to be higher for the PALS laser (12,100 K); the value for the Ti:Sapphire laser was calculated to reach 9500 K. For the experiments performed with the PALS laser, a high-speed streak camera was employed to measure the plasma plume longitudinal expansion velocities within 5 ns after the laser pulse. The measured values ranged between approximately 400 km/s and 1000 km/s; however, no evident dependence on the laser fluence was recognized. The dependence of the evaporated volume on the laser fluence and the sample material was studied and discussed. Even high-power laser pulses with energies of several hundred joules were able to directly vaporize only a limited volume, in the range of a few cubic millimeters, of the chondritic meteorite specimens, identified as the least resistant samples in our study. Laser-induced ablation also led to significant fragmentation of this material. Future research should therefore address the limits of high-power laser space technologies in exploring interplanetary matter, as elucidated at the laboratory level in this study, and should also consider how such systems can be deployed in space without causing international security tensions [91].
We also gratefully acknowledge the support of the regional collaboration with the Valašské Meziříčí observatory, provided under grant reg. no. R200402101 "Development of ground-based segment for space missions". Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: Not applicable. Conflicts of Interest: The authors declare no conflict of interest.
Towards a critique of indigenous African religion

How to cite this article: Strijdom, J., 2011, 'Towards a critique of indigenous African religion', HTS Teologiese Studies/Theological Studies 67(1), Art. #950, 4 pages. DOI: 10.4102/hts.v67i1.950

In this article, it is argued that a postcolonial critique of the colonial study of religion should not preclude a critique of indigenous African religion itself. The latter may be developed from a human rights perspective and a critique of exclusionary views of indigeneity. The argument is illustrated by means of specific case studies.

From the 16th until the 18th century, European observers denied the existence of religion amongst indigenous 'savages', maintaining that their 'superstitions' contained nothing that was similar to the true religion of Christianity, and thereby justified the claim that they 'had no human rights to life, land, livestock or control over their own labor' (Chidester 2000a:428). However, Western scholars in the 19th century of the newly established discipline of Religious Studies did come to acknowledge the existence of religion amongst indigenous peoples, but still considered their religion inferior by arguing that it retained animist elements from humankind's earliest (or most 'primitive') stage of evolution. The academic study of religion by means of this classification served European empires in justifying their conquests as a so-called civilising mission. 'This legacy', Chidester (2000b:314, 315, 317) says elsewhere, 'lingers in our current academic enterprises in the study of religion' and must be acknowledged as we develop 'innovative, cutting-edge methods in the history of religion'. Only by coming to terms with this 'horrible history', he insists, would we be 'well positioned to engage critically and creatively with the possibility of new horrible histories that might be on the horizon'. Towards the end of his essay, Chidester (2000a) offers some remarks on the possibilities that postcolonial theory may open for the future study of religion. He notes that: in more recent developments within postcolonial theory ... attention has shifted away from the critique of European representations of 'others' to a recovery of the subjectivity and agency of the colonized. (Chidester 2000a:432-436) Chidester then maps two opposite positions from which the postcolonial study of religion may proceed, namely indigeneity and hybridity. At the one extreme are those who speak from an indigenist location. Their aim is to recover and promote pure, authentic pre-colonial roots which they claim have essentially remained the same 'since time immemorial', but were suppressed during the colonial encounter. At the other extreme are those who view culture from a postmodern position of hybridity. This analytical strategy takes historical change seriously and focuses on the diversity and mixture of religious traditions as well as on diaspora communities, which emerged because of the cultural encounters. Does Chidester raise any critical concerns about these opposing positions? Postcolonial researchers speaking from a position of hybridity should, he emphasises, be aware that not all negotiations are equal (as is abundantly clear from the colonial encounter), whereas indigenists who cultivate a romantic nostalgia for pure, pre-colonial roots will have to contend with historical change and the diversity and mixture of traditions.
He nevertheless distinguishes a group of indigenists who use essentialism as a strategy to recover indigenous traditions that were suppressed by colonialism. Amongst these, he includes (Chidester 2000a:433):

• Fanon as a post-romantic indigenist, who linked the recovery of a suppressed past with his present violent struggle against colonialism.
• Hindutva, which is a term used to describe movements advocating Hindu nationalism and the notion of what constitutes true 'Hinduness' to be recovered and considered as 'the only indigenous religion of India'.
• African movements that reject 'colonial constructions of African mentality' and instead promote 'visions of African humanity and personality, communalism and socialism, in the interests of a postcolonial African renaissance'.

Magesa's thesis, in short, is that African religion 1 is an ethical religion that, through its myths and rituals, promotes life, by which is meant communal life that is structured hierarchically not only between the spirit and human realms, but also within the human realm itself (the ancestral spirits are above their living descendants, but so are the elders above the youth). Of the rites of passage, initiation serves to change the status of adolescent boys and girls from childhood to adulthood. During this ritual, they are typically secluded from their families and clans, taught about the traditional tasks expected of men and women, and a physical mark is often made on their bodies to remind them of that crucial event in their lives. After the seclusion, they are reintegrated into their groups as adults, now ready to get married with the intent to procreate, that is, to perpetuate life.

1. Magesa (1997:24-27) argues for the term 'African religion' (rather than 'African religions') on the basis of commonalities amongst sub-Saharan groups (i.e. the belief in a Supreme Being, the veneration of ancestral spirits and the centrality of communal existence). Shorter (2010:567), on the other hand, prefers the plural 'African religions'. He points out that African theologians (e.g. Mbiti and Idowu) tend to 'believe in ... a "super-religion" which purports to belong to all Africans', whereas social anthropologists (e.g. Evans-Pritchard and Mary Douglas) and historians (e.g. Ogot, Kimambo, Ranger and Gray) advocate a more factual and restricted comparative analysis of related African groups. I will use the term 'indigenous African religion' here, which is used by Chidester (2000a) and is appropriate to my reflection on indigeneity.

Where male and/or female circumcision is practiced in Africa, Magesa (1997:96-99) explains that the deliberate infliction of intense pain is intended to instil courage amongst the youth, which is a prerequisite to continue the life of the group. In the case of clitoridectomy or 'the excision or enlargement of the labia' (p. 96), mothers-to-be are thus prepared to bear courageously the pain of childbirth. The ritual, furthermore, binds the circumcised together as a united age-set or age-group. 'By mingling and sharing their blood by way of the initiation knife, or because they shed it at the same time', he says, 'they become truly brothers and sisters and must be ready to defend one another as brothers and sisters would do' (p. 98). There is no ethical problematisation of the practice by Magesa.
It is simply presented as an essential rite that promotes the 'life-force'. Advocates of universal human rights, however, beg to differ. Contrary to Magesa, they would argue that 'female genital mutilation', as they prefer to call the practice, violates the individual woman's right to bodily integrity. It is the right of every woman to decide not to have her body altered in this brutal way. Instead of promoting life, the practice often causes lifelong health problems to women; it should therefore be exposed and criticised as an unjust tradition that needs to be eradicated or modified to cause less potential damage to the quality of women's lives (cf. Salmon 1997). On controversial issues like this, it is argued that researchers have the obligation to take sides (cf. Welsch & Endicott 2003).

Martha Nussbaum (1999) has addressed the dilemma within a broader context. She opposes those who hold that human rights are a Western construct, which should not be imposed on other cultures, by arguing that such views are often expressed by the dominant voices within a culture that suppress alternative voices. All cultures are heterogeneous and changing. The argument for patriarchal domination, often based on religion, was, it must be emphasised, not uncommon a generation or two ago in Western countries, but this asymmetrical relation has been and is being questioned and transformed. The same rational argumentation for conditions that would enable women to pursue a fulfilling life should be applied throughout all nations. It is therefore clear to Nussbaum that the right to liberty of religion, as enshrined in liberal constitutions, should be limited in cases where religions threaten the well-being of individual women. Where religions are guilty of such practices they should, she proposes as a solution, be gently encouraged by liberal states and international human rights organisations (e.g. NGOs [non-governmental organisations] like Human Rights Watch and Amnesty International) to change their discriminatory ways so that they will eventually conform to the human rights enshrined in their constitutions. The education of reflective, democratic citizens in public schools, who engage critically with their traditions rather than submit in blind admiration to them, may greatly help towards this end (Nussbaum 1999:116).

Witchcraft accusations may be taken as a second instance of a conflict between human rights and indigenous African religion. Magesa (1997) describes witchcraft as the main enemy of the life force. In African religion, evil that befalls an individual or community is generally explained as caused by a witch, usually an old, unsociable and eccentric woman who is said to have inherited the power to manipulate inherent sinister forces in order to harm her enemies. In order to enforce conformity to traditional communal values and ensure its moral well-being, the afflicted community has no choice but to identify such evil with the help of a diviner and to eliminate it from their midst. According to Magesa (1997): ... witchcraft is intolerable for any society that values ethical principles and life itself ... we can clearly see the role of witchcraft as sanction against immoral behaviour.
If and when a person is convicted of witchcraft, the consequences are invariably grave. The Lamba of Zambia spear a witch to death. The Akamba of Kenya execute proven witches by arrows. Some African communities kill witches by beating or strangling them to death, or by burning them alive. Another form of punishment is banishment from the community, which, in the African conception of human life, is the equivalent of death ... (Magesa 1997:171-172)

In her introduction to a collection of articles on witchcraft beliefs and accusations 2 in contemporary Africa (mostly written by African scholars), Gerrie Ter Haar (2007) argues that the victimisation of alleged witches is a human rights issue. The banning or killing of old women, and recently of children, accused of witchcraft violates the individual's most basic right to life. She emphasises, however, that the introduction of legislation to suppress witch-hunts would not suffice unless communities are involved at grassroots level. National legislation that defends and protects the rights of those victimised as witches should certainly not be scrapped. However, education that starts with the traditional worldviews of Africans is imperative; so too is the intervention of NGOs, including religious organisations, which act on behalf of alleged witches. She rightly cautions, though, against Pentecostal Churches that may 'strengthen fears of witchcraft rather than [help] to reduce them' (Ter Haar 2007:26).

My first point, then, is that Chidester's moral critique of the colonial study of religion should not prevent us from developing a critique of indigenous African religion itself. The discourse of human rights may serve as a 'normative strategy' to engage in this important task (cf. also Hackett 2003:183). We must, of course, remember that the concept of human rights itself should not be essentialised, but that it has a history and is therefore open to continuous renegotiation.

2. Ter Haar (2007) and Ellis (2007) distinguish between beliefs in witchcraft and accusations of witchcraft. They argue that the first need not necessarily lead to witch-hunts (as is clear from the history of witchcraft in early modern and modern Europe), but also concede that a link between the two should in the end not be understated. They, furthermore, distance themselves from the 19th century colonial pejorative application of the term 'witchcraft' to the whole of indigenous African religion (i.e. not only to 'witches' as defined earlier, but also to diviners, healers etc.), but do emphasise that this colonial superimposition transformed the traditionally 'neutral', or rather 'ambivalent', spirit realm (it could be used to do good or harm) into one where evil became much more pronounced. They define religion in sub-Saharan Africa as 'a belief in the existence of an invisible world, distinct but not separate from the visible one, that is home to spiritual beings with effective powers over the material world' (Ellis & Ter Haar 2007:387; cf. also Ellis & Ter Haar 2004:14).
Although we know it best from the United Nations' (UN) Declaration of Universal Human Rights in 1948, the first-generation rights of basic freedoms emerged from the 18th century French and American revolutions, with the second generation of social-economic rights resulting from the 19th century exploitation of industrial workers. The third generation of cultural rights has become prominent only in the past 50 years, since postcolonial states have gained independence. Some of these rights are indeed contested and in tension with each other: for example, at what point should freedoms be limited, or would respect for cultural and religious groups inhibit free critical discourse? Thinkers like Amartya Sen and Martha Nussbaum argue that the concept of human rights may have become too vague and therefore propose a list of capabilities 3 that clarifies the conditions that a state should create to make it possible for all its citizens to live a worthy life. However, this list too remains open for continuous negotiation between cultures and leaves room for culturally specific applications. Other more radical thinkers like Etienne Balibar emphasise that although states have the responsibility to see to the implementation of human rights for their citizens, it is the continuous 'revolt' of activists who resist oppression and exploitation that may help towards this end. 4

I now return to the dangerous potential of indigenist discourses and practices when they become exclusionary by drawing the circles of identity ever narrower and more rigidly. I will use Peter Geschiere's (2009) The perils of belonging to show how an historically and contextually specific analysis of funerary rituals may help towards a critique of indigenist approaches to culture and religion. Magesa (1997) considers communality central to the morality of African Religion, contrasting it with Western individualism. This 'guiding principle of African people's ethical behaviour' is well captured, he says, by the term Obuntu (elsewhere also Ubuntu) as the 'quintessence of authentic humanity' and appropriately summarised in the phrase: 'I am, because we are; and since we are, therefore I am' (Magesa 1997:66-67). But this noble ideal becomes problematic once we begin to study African group formations historically and in specific locations.

3. For one version of her open-ended list of capabilities, see Nussbaum (2007:76-78). Here she discusses the following ten 'central human capabilities' that citizens of a just state should be entitled to: life; bodily health; bodily integrity; senses, imagination and thought; emotions; practical reason; affiliation; other species; play; control over one's environment (political and material). 'A life worthy of human dignity', she argues, would require each and every one of these capabilities. She believes it would be possible to 'gather broad cross-cultural agreement [on these central capabilities], similar to the international agreements that have been reached concerning basic human rights'. She continues: 'Indeed, the capabilities approach is, in my view, one species of a human rights approach, and human rights have often been linked in a similar way to the idea of human dignity'.
4. Cf. Menke (2007) for a discussion of Arendt's critique of the idea of human rights. Arendt (1949, 1955) argued, firstly, that human rights are not inalienable but are always negotiated between human beings, and secondly that the concept would be 'nonsense on stilts' if one were not to belong as a citizen to a state that could see to their implementation. Balibar (2007) highlights the importance of continuous dissidence and activism as a prerequisite for the realisation of human rights in Arendt's thought. Recently Habermas (2010) argued that the moral ideal of equal human dignity underlies all human rights, which are continuously and in different contexts to be worked out and specified in the making of laws and to be implemented by political systems. He insists (translated from the German): 'The tension between idea and reality, which with the positivisation of human rights breaks into reality itself, confronts us today with the challenge of thinking and acting realistically without betraying the utopian impulse. This ambivalence all too easily leads to the temptation either to side, idealistically but noncommittally, with the overshooting moral contents, or to adopt the cynical pose of the so-called "realist".'

Who, we will then need to ask, is included and who is excluded under 'we' in a specific place and time, and how do myths and rituals mediate, confirm or challenge such identities? Geschiere (2009) offers a description of the changes that he has observed over 30 years in funerary rituals in postcolonial Cameroon and interprets their changing form and function within the context of its changing politics. Under Ahmadou Ahidjo's one-party dictatorship in the 1960s and 1970s, citizenship belonged to those who resided within the national borders. All citizens were obliged to stand united behind the leader and to enact their loyalty by means of stiff, compulsory rituals imposed from above. Cultivating local support was strictly forbidden. Traitors were to be identified, renounced and punished. When Paul Biya took over in the early 1980s, he continued the suppression of dissidents, but was forced, due to international political and economic pressure, to introduce multiparty elections by the early 1990s. One of the strategies that Biya used to stay in power was to divide the opposition by actively promoting adherence to one's village of origin and ethnic group. It is within this context that 'at home' funerals started to take on a new significance, becoming the ultimate test of local belonging. It is not that traditional funerary rituals were not important before the 1990s. Geschiere (1990:190-196) offers a vivid description of their performance that he witnessed in the early 1970s amongst Maka villagers in the southern rainforest of Cameroon. These emotionally intense rituals enacted and mediated kinship roles, between mourning patrilineal descendants on the one hand and joyfully dancing but aggressive in-laws on the other hand. 5
What happened in the 1990s was that these 'more private rituals increasingly invaded the public sphere, relegating the rituals of nation-building to a more or less secondary role' (Geschiere 2009:190). Being buried and attending funerals in one's ancestral village, as proof of where one 'really' belonged, now came to be used by politicians to cultivate local voters' support along kinship and ethnic lines, contrary to Ahidjo's earlier policy. The consequence of this new autochthonous policy was a continuous fragmentation of collectivities: who was considered to belong 'authentically' and who was to be excluded? Who were autochthons and who were allochthons or strangers within a specific territory? Geschiere's critique of autochthonous discourse and policies is not limited to postcolonial Cameroon alone. He extends his analysis to similar cases elsewhere in Africa, for example, in the Ivory Coast with Gbagbo's disastrous Opération Nationale d'Identification, in the Eastern Democratic Republic of Congo (DRC), and in South Africa's recent xenophobic outbursts. Most instructively, he includes the indigenist turn in Europe as well, particularly in the Netherlands and in Flanders, but also in France and in ancient Athens as the cradle of the discourse.

5. Geschiere (2009:192-196) notes that the Maka proverbially refer to marriage as 'war': exogamous marriage by definition means an encounter with another, potentially hostile group. During the funeral, the in-laws typically insult the deceased and claim victory by singing songs like 'now the vengeance is mine'. The purpose of the ritual is to act out these ambiguities (or 'precarious mixture of aggression and solidarity') and make them liveable. The funeral, he says, 'thus becomes a dramatic acting out of the map of kinship and affinity that links persons and groups'.

In all these cases exclusionary, monolithic versions of history and culture are constructed, resulting in fragmentation, intolerance and often violence. The task of historical study is to uncover the complex layers of history and to highlight the diversity of voices within any group. This is no trivial relativistic exercise, but a moral duty as explained earlier, to which (I believe) the historical and comparative study of religion may continue to contribute in the search for the social and political well-being of people, not least for people on the continent of Africa.
Deoxycholic Acid Triggers NLRP3 Inflammasome Activation and Aggravates DSS-Induced Colitis in Mice

A westernized high-fat diet (HFD) is associated with the development of inflammatory bowel disease (IBD). High-level fecal deoxycholic acid (DCA) caused by HFD contributes to the colonic inflammatory injury of IBD; however, the mechanism concerning the initiation of the inflammatory response by DCA remains unclear. In this study, we sought to investigate the role and mechanism of DCA in the induction of inflammation via promoting NLRP3 inflammasome activation. Here, we, for the first time, showed that DCA dose-dependently induced NLRP3 inflammasome activation and production of the highly pro-inflammatory cytokine IL-1β in macrophages. Mechanistically, DCA triggered NLRP3 inflammasome activation by promoting cathepsin B release, at least partially through sphingosine-1-phosphate receptor 2. Colorectal instillation of DCA significantly increased the mature IL-1β level in colonic tissue and exacerbated DSS-induced colitis, while in vivo blockage of the NLRP3 inflammasome or macrophage depletion dramatically reduced mature IL-1β production and ameliorated the aggravated inflammatory injury imposed by DCA. Thus, our findings show that high-level fecal DCA may serve as an endogenous danger signal to activate the NLRP3 inflammasome and contribute to HFD-related colonic inflammation. The NLRP3 inflammasome may represent a new potential therapeutic target for the treatment of IBD.

Keywords: high-fat diet, bile acid, inflammation, inflammasome, IL-1β, inflammatory bowel disease

Introduction

A westernized high-fat diet (HFD) is associated with the development of diverse inflammatory diseases, including inflammatory bowel disease (IBD). Epidemiological studies indicate that HFD consumption, as an important environmental factor, could increase the risk of both ulcerative colitis and Crohn's disease (1,2). Increasing evidence shows that prolonged exposure to the high level of fecal bile acids, which is caused by HFD, contributes to the occurrence of IBD and gastrointestinal cancer (3)(4)(5).
Deoxycholic acid (DCA) makes up 58% of bile acid in human feces, and dietary fat is observed to mainly increase fecal secondary bile acids, especially DCA, which further increases the concentration of colonic DCA (6,7). Stenman and colleagues found that a diet high in fat increased the fecal concentration of DCA nearly 10-fold (7). Furthermore, high level of DCA, which is comparable to its concentration in feces of high-fat-fed mice could disrupt epithelial integrity and is related to barrier dysfunction (8,9). Meanwhile, transient colorectal instillation of DCA in rat leads to mild colonic inflammation, whereas longterm feeding of mice with a diet supplemented with DCA, which mimic the effect of a HFD, induces obvious colonic inflammation and injury that resembles human IBD (10,11). These findings support the potential role of excessive fetal DCA in mediating colonic inflammatory injury of IBD; however, the mechanism concerning the initiation of inflammatory response by DCA remains largely unclear. The innate immune system provides the first line to recognize microbes or endogenous molecules via pathogen-associated molecular patterns (PAMPs) or damage-associated molecular patterns (DAMPs) by host pattern recognition receptors (PRRs). Inflammasome is a major component of innate immunity, and recent studies have highlighted the critical role of NLRP3 inflammasome in the inflammatory response. NLRP3 inflammasome is a molecular platform that can be activated by multiple PAMPs or DAMPs and thus involved in diverse inflammatory diseases (12)(13)(14). Upon activation, NLRP3 recruits apoptosis-associated speck-like protein (ASC) and caspase-1 (interleukin-1 converting enzyme, ICE), leading to the maturation and secretion of highly pro-inflammatory cytokines, such as IL-1β (15). Unlike other cytokines, bioactive IL-1β production relies on inflammasome activation (16)(17)(18). More importantly, emerging evidences suggest the pivotal role of NLRP3 inflammasome in the development and pathogenesis of IBD (19). Single nucleotide polymorphisms of nlrp3 gene have been linked to the development of Crohn's disease (20). NLRP3 as well as caspase-1-deficient mice were protected from DSS-induced colitis (21,22). Consistently, clinical studies show increased IL-1β level in the serum and inflamed colonic tissues of IBD patients, and IL-1β levels are correlated well with the severity of intestinal inflammation and disease activity (23)(24)(25)(26). Furthermore, pharmacological inhibition of IL-1β or Caspase-1 was shown to successfully ameliorate intestinal inflammation in colitis animal models (27,28). Given the important role of the inflammasome in intestinal immunity, we hypothesized that NLRP3 inflammasome activation may be involved in the DCA-induced colonic inflammation. In this study, we provide evidence that DCA can activate NLRP3 inflammasome and induce obvious mature IL-1β production in macrophages by promoting cathepsin B release at least partially via S1PR2 receptors. Colorectal instillation of DCA in mice strongly aggravates DSS-induced colitis and caspase-1 inhibition as well as macrophage depletion substantially alleviates colonic inflammation and injury. Mice The 6-to 8-week-old C57BL/6 female mice were purchased from Experimental Animal Center of the Chinese Academy of Sciences (Shanghai, China) and housed in a specific pathogen-free (SPF) facility. 
The animal study protocols complied with the Guide for the Care and Use of Medical Laboratory Animals issued by the Ministry of Health of China and approved by the Shanghai Laboratory Animal Care and Use Committee. cells The murine macrophage cell line J774A.1 was obtained from Type Culture Collection of the Institutes of Biomedical Sciences, Fudan University (Shanghai, China). J774A.1 cells were cultivated in DMEM culture medium (Invitrogen) supplemented with 10% fetal bovine serum (Gibico) and 1% penicillin/ streptomycin (Invitrogen) at 37°C with 5% CO2. Bone marrowderived macrophages (BMDMs) were isolated and cultured as described elsewhere (29). Briefly, bone marrow cells were harvested from femurs and tibiae of C57BL/6 mice. Cells were then cultured in DMEM supplemented with 10% FBS and 30% L929 cell-conditioned medium (as a source of M-CSF) for 6-7 days. Adherent cells were used in the following experiments. In Vitro Dca Treatment J774A.1 cells or BMDMs were primed with 1 μg/ml LPS for 5 h before stimulation with DCA at different concentrations, then, supernatants (SNs) were harvested at indicated time points and the IL-1β level was determined by ELISA Kit (eBioscience) according to the manufacturer's instructions. For some experiments, various inhibitors (e.g., NAC, CA-074 Me) were added to the culture medium 30 min ahead of DCA treatment. lysosome and cathepsin B imaging Lipopolysaccharide-primed J774A.1 cells were incubated with or without DCA (100 μM, 24 h); then, the cells were stained with Lyso Tracker Green DND-26 (Invitrogen) or cathepsin B fluorogenic substrate z-Arg-Arg cresyl violet (Neuromics) for 1 h, followed by Hoechst staining for half an hour. Fluorogenic signals were captured by inverted fluorescence microscope (Leica). reactive Oxygen species Measurement Lipopolysaccharide-primed J774A.1 cells were treated with or without DCA (100 μM), and nigericin stimulation (20 μM) was regarded as positive control. ROS production was measured by using DCF-DA (Invitrogen) probes according to the manufacturer's instructions. Briefly, cells were incubated with DCF-DA (15 μM) for 1 h at 37°C after DCA stimulation. Fluorescence was visualized directly under a fluorescence microscope. icP-Oes assay Lipopolysaccharide-primed J774A.1 cells (1 × 10 7 ) were treated with or without DCA (100 μM, 24 h); then, the cells were lysed in ultra pure nitric acid before microwave digestion and then diluted to 5% HNO3. Intracellular K + was analyzed by using Perkin Elmer Optima 8000 ICP-OES Spectrometer. External K calibration was performed between 0 and 10 ppm. Western Blot J774A.1 cells were lysed by protein lysis buffer (Sigma) containing protease and phosphatase inhibitors (Theromo), and the cell culture supernatant was concentrated by acetone precipitation. Cell lysates (50 μg) or concentrated supernatant proteins were resolved by SDS-PAGE, transferred to PVDF membranes (0.2 μm), and probed with antibodies against IL-1β (Cell Signaling Technologies), Caspase-1 (Santa Cruz, CA, USA), NLRP3 (R&D), TGR5 (Abcam), and β-actin (sigma). For the detection of cytosolic cathepsin B, cells were thoroughly washed and permeabilized with extraction buffer containing 50 μg/ ml digitonin for 15 min at 4°C to lyse the plasma membrane without disturbing the intracellular membranes. These cell lysates were then subjected to SDS-PAGE and immunoblotted for cathepsin B (Santa Cruz, CA, USA). 
Reactive signals were detected by ECL Western Blotting Substrate (Thermo Fisher Scientific, Waltham, MA, USA) and ChemiDoc™ XRS + System (Bio-Rad). cytosolic cathepsin B activity assay Extraction of cytosolic protein was performed as described above and cathepsin B activity was determined by a fluorometric assay kit (ApexBio). Briefly, 50 μl of each sample (containing 100 μg of total protein) was incubated with cathepsin B reaction buffer (50 μl) and substrate Ac-RR-AFC (10mM, 2 μl) at 37°C for 1 h, and free amino-4-trifluoromethyl coumarin (AFC) was measured through a fluorescence spectrophotometer with an excitation wavelength of 400 nm and an emission wavelength of 505 nm. colitis induction and Treatment Acute colitis was induced in C57BL/6 mice with 2.5% DSS (MP Biomedicals) dissolved in drinking water given ad libitum for 7 days. DSS-treated animals were randomly divided into three groups and received an enema of PBS, 4mM DCA (in PBS, 0.1 ml), or 4mM DCA plus intraperitoneal injection of caspase-1 inhibitor (belnacasan, 50 mg/kg/day), respectively, for seven consecutive days from day 1 of DSS treatment (n = 7 in each group). Body weight was measured daily throughout the course of experiment. On day 8, mice were sacrificed and colon length was measured. The paraffin sections of colon tissues were stained with hematoxylin and eosin. A scoring system was applied to assess diarrhea and the presence of occult or overt blood in the stool. Colon homogenates were used for immunoblot analysis of mature IL-1β and assessment of MPO activity. In Vivo Macrophages Depletion To evaluate the role of macrophages in the colonic inflammation exacerbated by DCA, colitis was induced in C57BL/6 mice with 2.5% DSS for 7 days as described above. Macrophages depletion was performed by intraperitoneal injection of 0.2 ml clodronateliposomes (www.clodronateliposomes.com, Netherlands) 4 days prior to DSS treatment and on days 0, 2, 4, and 6 during DSS treatment as described elsewhere (30). Animals were randomly divided into five groups, including control group, DSS-treated group, DSS-macrophages depletion group, DSStreated plus DCA enema group and DSS-treated plus DCA enema-macrophages depletion group (n = 7 in each group). On day 8, mice were sacrificed for sample collection and analysis as mentioned above. histological analysis Colonic histological scoring was determined by inflammatory cell infiltration (0-3) and tissue damage (0-3) in a blinded manner. For tissue inflammation, increased numbers of inflammatory cells in the lamina propria were scored as 1, confluence of inflammatory cells extending into the submucosa as 2, and transmural extension of the infiltrate as 3. For tissue damage, discrete lymphoepithelial lesions were scored as 1, mucosal erosions were scored as 2, and extensive mucosal damage and/or extension into deeper structures of the bowel wall were scored as 3. The combined histological score ranged from 0 to 6. statistics All results were expressed as mean ± SEM. Statistical significance was assessed by two-tailed Student's t-test or one-way analysis of variance (ANOVA). Differences were considered statistically significant at p < 0.05. Dca induces caspase-1 activation and il-1β Maturation in Macrophages The NLRP3 inflammasome recognizes many endogenous materials as danger signals and triggers the release of strong pro-inflammatory cytokines including active IL-1β, thus contributing to diverse diseases as atherosclerosis, Alzheimer's disease, and T2 diabetes (31,32). 
Here, we hypothesized that DCA may also exert its inflammatory potential via activating inflammasomes. Therefore, LPS-primed murine macrophage cell line J774A.1 was treated with different dosage of DCA, and the result showed that IL-1β secretion was obviously induced in a dose-and time-dependent manner in response to DCA (Figures 1A,B). IL-1β maturation was further confirmed by the detection of cleaved IL-1β and active caspase-1 as assessed by western blotting (Figure 1C). Meanwhile, DCA also dose-dependently induced IL-1β maturation and secretion in murine bone marrow-derived macrophages (BMDMs) ( Figure 1D). Moreover, we treated macrophages with DCA after LPS challenge and observed that DCA stimulation did not significantly increase the pro-IL-1β, NLRP3, and ASC level (Figure 1C), and DCA alone had no obvious effect on pro-IL-1β and NLRP3 expression either (data not shown). These data demonstrate that DCA could induce IL-1β maturation and release in macrophages through promoting inflammasome activation. In order to investigate whether DCA can activate the NLRP3 inflammasome, which is usually involved in caspase-1 activation induced by multiple danger signals, we knocked down the expression of NLRP3 by transfecting macrophages with siRNA specific for Nlrp3. We found that LPS-primed J774A.1 macrophages transfected with control siRNA produced large amounts of IL-1β upon DCA exposure. In contrast, macrophages transfected with Nlrp3 siRNA produced much less IL-1β in response to DCA or nigericin (Figure 2B; Figure S1 in Supplementary Material), indicating the requirement of NLRP3 for IL-1β processing. Meanwhile, Nlrp3 siRNA transfection had no effect on IL-1β production induced by Salmonella or poly(dA:dT) ( Figure S1 in Supplementary Material). These results suggest that DCA at least partially activates the NLRP3 inflammasome, leading to the activation of caspase-1 and subsequent IL-1β maturation and release. Dca-induced nlrP3 inflammasome activation requires cathepsin B release Three major cellular events are regarded as common mechanisms for the NLRP3 inflammasome activation, including potassium (K + ) efflux, reactive oxygen species (ROS) formation, and cathepsin B leakage (33)(34)(35)(36). To identify the upstream events involved in DCA-induced inflammasome activation, we observed the effect of selective inhibitors on IL-1β secretion in DCA-treated macrophages. DCA-induced IL-1β secretion was not suppressed by the inhibition of ROS formation and K + efflux (Figures 3A,B), which was confirmed by the fact that DCA stimulation had no obvious effect on intracellular ROS formation and potassium level ( Figure S2 in Supplementary Material); however, it was dramatically blunted by CA-074Me, a cathepsin B inhibitor ( Figure 3C). Consistently, DCA treatment strongly decreased the staining of lysosomal cathepsin B ( Figure 3D) and increased cytosolic cathepsin B level as well as its activity (Figures 3E,F), which indicated the release of cathepsin B into cytoplasm. Unexpectedly, lysosomes retained their morphology in DCA-treated cells (Figure 3D), implying the involvement of other mechanisms responsible for the DCA-induced cathepsin B release. These data suggest that DCA induces NLRP3 inflammasome activation mainly through promoting cathepsin B release. 
Dca-induced cathepsin B release and il-1β Production are Mediated by sphingosine-1-Phosphate receptors 2 To better understand the molecular mechanism underlying the induction of NLRP3 inflammasome activation by DCA, the major bile acid nuclear receptor farnesoid X-receptor (FXR) and membrane receptor TGR5 were selectively inhibited (37). Unexpectedly, neither suppression of FXR nor knockdown of TGR5 expression had obvious effect on mature IL-1β production in response to DCA (Figures 4A,B; Figure S3 in Supplementary Material), which indicated the involvement of other bile acid receptors. Sphingosine-1-phosphate receptor 2 (S1PR 2), which belongs to G protein-coupled receptor (GPCR), is reported to be another kind of bile acid receptor that appears to play an important role in the regulation of hepatic lipid metabolism (38). Importantly, S1PR2 is found as the major S1PRs expressed in monocytes/macrophages and also has been required for mast cell degranulation and chemotaxis toward the site of inflammation (39), implying its important role in the immune regulation and inflammatory response. Here, we observed that S1PR2 antagonist JTE-013 could significantly inhibit IL-1β release upon DCA stimulation ( Figure 4C); intriguingly, JTE-013 could dramatically prevent cathepsin B release from lysosome ( Figures 4D-F), which is critical in the DCA-triggered NLRP3 inflammasome activation. These data strongly suggest that S1PR2 is a key mediator of NLRP3 inflammasome activation induced by DCA. Dca administration exacerbates Dssinduced colitis and caspase-1 inhibition exhibits significant Protective role To investigate the effect of DCA and its inductive role of inflammasome activation in the development of colitis, DSS-treated mice received an enema of 0, 4mM DCA, or 4mM DCA plus intraperitoneal injection of caspase-1 inhibitor belnacasan. The addition of 4mM DCA enema caused much more severe colitis than DSS treatment alone, as evidenced by significant decrease of body weight and shortening of colon length (Figures 5A,B), much higher haematochezia score, and MPO activity (Figures 5C,D). Of note, the mature IL-1β level in colon tissue is elevated dramatically in DCA enema group (Figure 5E), indicating the highly activation of inflammasome. Consistently, H&E staining of colonic tissue in DCA enema group showed significantly higher mucosal inflammatory cell infiltration and more severe epithelial layer destruction compared to that of the DSS alone group (Figure 5F). Indeed, inhibition of caspase-1 with belnacasan obviously reduced the mature IL-1β level in colon tissue and largely prevented the deteriorating role of DCA in the DSS-induced colitis (Figures 5A-F). These data provide further evidence that induction of inflammasome activation is a major pathogenic mechanism of DCA in the colonic inflammation. Macrophages Depletion abrogates the exacerbating role of Dca in the Dss-induced colitis Since IL-1β is primarily produced by activated macrophages, we sought to study whether colonic macrophages are the major effectors of DCA. Macrophage depletion was achieved by injection of clodronate-containing liposomes (CL), which can selectively cause macrophage apoptosis. Immunohistochemistry staining showed mucosal infiltration with large numbers of macrophages in DSS-treated, DCA enema mice, but not clodronate liposomesadministrated counterparts ( Figure S3 in Supplementary Material). 
Macrophage depletion almost abrogated the production of mature IL-1β in colon tissues induced by DCA enema (Figure 6E) and significantly improved the clinical parameters of DSS-treated, DCA enema mice, showing less body weight decease and colon length shortening (Figures 6A,B), much lower haematochezia score, and MPO activity (Figures 6C,D), as well as less mucosal inflammatory cell infiltration and less disruption of the mucosal epithelium ( Figure 6F). These data indicate that DCA exerts its pro-inflammatory effect mainly through colonic macrophages. DiscUssiOn High-fat diet affects bile acid metabolism, and dietary fat also changes fecal bile acid profile, characterized by substantial increase of DCA in feces, which may contribute to the pathogenesis of IBD; however, the detailed mechanisms still need to be explored. In this study, we, for the first time, proved that DCA could dose-dependently induce NLRP3 inflammasome activation and highly pro-inflammatory cytokine-IL-1β production in macrophages. Knockdown of NLRP3 expression and caspase-1 inhibition largely abrogated DCA-induced IL-1β secretion. In vivo experiments showed that colorectal instillation of DCA at the concentration comparable to HFD significantly exacerbated DSS-induced colitis, as evidenced by substantial decrease of body weight, increased histological colitis severity, and importantly, pronounced elevation of mature IL-1β level in colon tissue. Furthermore, blockage of NLRP3 inflammasome activation or macrophage depletion obviously reduced the mature IL-1β level and protected mice from the aggravated inflammation imposed by DCA. Together, our results provide a new mechanism that high level DCA caused by HFD contributes to colonic inflammation through activating NLRP3 inflammasome. NLRP3 inflammasome, as the most extensively studied inflammasome, can be activated by diverse stimuli, including bacterial toxins, small molecules, as well as various crystals, many of which are host-derived components, such as uric acid, fatty acid, and ATP; here, we demonstrated that secondary bile acid DCA can also activate NLRP3 inflammasome. Despite the structure and characteristic of different stimuli diverse, potassium (K + ) efflux, cathepsin B leakage from lysosomes, and ROS production are the three common mechanisms of NLRP3 inflammasome activation (33)(34)(35)(36). The data obtained in our study demonstrated that NLRP3 inflammasome activation by DCA requires cathepsin B leakage. Numerous studies have shown that cathepsin B leakage usually occurs in the crystal-induced inflammasome activation, such as silica and uric acid crystals (40,41), accompanied with lysosomal disruption. However, we found that phagocytic uptake was not required for DCA-induced IL-1β secretion (data not shown), and there was no evidence of lysosome damage either, thus indicating the involvement of other mechanisms responsible for the DCA-induced cathepsin B release. Further evidence showed that, instead of the major bile acid receptors (nuclear receptor FXR and membrane receptor TGR5), another kind of GPCR S1PR2 (also known as a bile acid receptor) participated in the DCA-induced IL-1β secretion. In line with our findings, S1PR2 was observed to modulate pro-inflammatory cytokine production (including IL-1β) in bone marrow cells and affect osteoclastogenesis, thus plays an essential role in inflammatory bone loss diseases (42). 
In addition, Skoura and colleagues showed that S1PR2 expressed in macrophages of atherosclerotic plaques regulates inflammatory cytokine secretion (IL-1β, IL-18) and promotes atherosclerosis (43). Our finding that an S1PR2 antagonist dramatically prevented cathepsin B release induced by DCA confirms the role of S1PR2 in mediating NLRP3 inflammasome activation and IL-1β secretion. Therefore, unlike the case of endocytosed crystals, DCA-induced cathepsin B release may be receptor-mediated, although the intermediate process needs further investigation. Blocking S1PR2 signaling might be a novel therapeutic strategy for HFD-related inflammatory diseases such as atherosclerosis and IBD. A very recent report by Guo and colleagues (44) showed that, in contrast to our findings, bile acids exerted an inhibitory effect on NLRP3 inflammasome activation. In that report, they demonstrated that bile acids inhibited NLRP3 inflammasome activation via the TGR5-cAMP-PKA axis, whereas our data showed that DCA triggered NLRP3 inflammasome activation through the S1PR2-cathepsin B pathway. One possible explanation of the discrepancy is that bile acids may have a dual regulatory effect on inflammasome activation mediated by different mechanisms. The distinct modulatory effects of bile acids on inflammasome activation may depend on diverse factors, such as bile acid concentration, the receptors involved, and the presence or absence of other inflammasome activators. Guo et al. showed that bile acids at their physiological concentrations exhibited a suppressive effect on inflammasome activation, while our findings suggest that high-level DCA serves as a danger signal and contributes to inflammasome activation, which parallels the situation in which excessive DCA induced by HFD exacerbated colitis. In addition, the presence or absence of other inflammasome stimuli (such as nigericin or ATP) could be another crucial factor. Similar to this dual effect of bile acids, epinephrine and norepinephrine inhibit cytokine production (IL-1β, IL-6, and TNF-α) in the presence of pro-inflammatory stimuli, but otherwise increase such cytokine production through the β2-adrenergic receptor (β2AR) (45). Taken together, although the precise mechanisms by which bile acids regulate inflammasome activation remain to be explored, the role of bile acids may be a double-edged sword and their in vivo concentrations should be tightly controlled. Our in vivo studies showed that DCA instillation at a high concentration, comparable to the concentration reached on a HFD (7), significantly exacerbated DSS-induced colitis. These results are consistent with a previous report that a DSS-induced chronic ulcerative colitis model showed mild inflammatory manifestations that became more aggravated when HFD was administered (46). In our model, DSS first compromises the epithelial barrier, allowing translocated bacterial endotoxin (e.g., LPS) to stimulate lamina propria macrophages and provide the first signal for the synthesis of pro-IL-1β; subsequently, excessive DCA triggers NLRP3 inflammasome activation and promotes bioactive IL-1β secretion, thus contributing to the intestinal inflammatory response. 
These results offer a new explanation for the aggravated inflammatory injury in individuals consuming a HFD who already suffer from intestinal barrier dysfunction; for example, high meat intake has been reported to correlate with an increased likelihood of UC relapse (47). From another perspective, numerous in vitro and ex vivo studies show that high-level DCA is toxic to epithelial cells and able to disturb intestinal permeability; meanwhile, mice fed a diet supplemented with DCA for 10 weeks to mimic HFD exhibit impaired gut barrier function (9). These findings indicate that long-term exposure to excessive DCA alone can cause intestinal barrier dysfunction and increase the passage of both LPS and DCA into the gut mucosa, which may result in the initiation of inflammation, at least partially, via inflammasome activation. Certainly, many other factors are involved in HFD-induced colitis, including the intestinal flora, which also affects bile acid metabolism by increasing the levels of secondary bile acids (such as DCA). The contribution of the interaction between bile acids and intestinal bacteria to the development of HFD-induced colitis remains to be further investigated. Collectively, our findings disclose that excessive DCA serves as an endogenous danger signal to activate the NLRP3 inflammasome in macrophages and exacerbates colonic inflammation. Additionally, increased fecal DCA has also been implicated in the promotion of colon tumorigenesis. DCA has been observed to induce colonic crypt cell proliferation and is related to colon cancer growth and progression (48-50). Since chronic uncontrolled inflammation in the intestine is closely related to the development of colitis-associated cancer (CAC), our findings suggest another possible mechanism by which DCA participates in the development of colon cancer. In summary, our study offers a plausible mechanistic basis by which HFD might increase the prevalence of colitis and suggests lifestyle changes to prevent HFD-related inflammatory diseases such as IBD. Author Contributions SZ, ZG, and JZ performed research and analyzed data. CT and YG performed research and discussed results. CX analyzed data.
5,965.4
2016-11-28T00:00:00.000
[ "Biology", "Medicine" ]
The Extended Oxygen Window Concept for Programming Saturation Decompressions Using Air and Nitrox Saturation decompression is a physiological process of transition from one steady state, full saturation with inert gas at pressure, to another one: standard conditions at surface. It is defined by the borderline condition for time spent at a particular depth (pressure) and inert gas in the breathing mixture (nitrogen, helium). It is a delicate and long lasting process during which single milliliters of inert gas are eliminated every minute, and any disturbance can lead to the creation of gas bubbles leading to decompression sickness (DCS). Most operational procedures rely on experimentally found parameters describing a continuous slow decompression rate. In Poland, the system for programming of continuous decompression after saturation with compressed air and nitrox has been developed as based on the concept of the Extended Oxygen Window (EOW). EOW mainly depends on the physiology of the metabolic oxygen window—also called inherent unsaturation or partial pressure vacancy—but also on metabolism of carbon dioxide, the existence of water vapor, as well as tissue tension. Initially, ambient pressure can be reduced at a higher rate allowing the elimination of inert gas from faster compartments using the EOW concept, and maximum outflow of nitrogen. Then, keeping a driving force for long decompression not exceeding the EOW allows optimal elimination of nitrogen from the limiting compartment with half-time of 360 min. The model has been theoretically verified through its application for estimation of risk of decompression sickness in published systems of air and nitrox saturation decompressions, where DCS cases were observed. Clear dose-reaction relation exists, and this confirms that any supersaturation over the EOW creates a risk for DCS. Using the concept of the EOW, 76 man-decompressions were conducted after air and nitrox saturations in depth range between 18 and 45 meters with no single case of DCS. In summary, the EOW concept describes physiology of decompression after saturation with nitrogen-based breathing mixtures. Introduction Saturation diving is defined as the situation where one is at depth or pressure for a long enough period of time to have the partial pressures of the dissolved gas in the body at equilibrium with the partial pressure of those in the ambient atmosphere [1]. The minimum time for reaching saturation depends on the slowest compartment of the human body, which dissolves inert gas. The saturation process and the rate of change in partial pressure depends on the difference between partial pressures of inert gas in an inhaled breathing mixture and dissolved in a body compartment. So, mathematically, it is described by the half-time of the exponential process, defined as the time it takes for the compartment to take up 50% of the difference in dissolved gas capacity at a changed partial pressure. Practically, after six half-times, every compartment is almost fully saturated (98.44%) [2]. From this point, decompression after exposure is the longest one and no longer depends on time at pressure. From the physical point of view, decompression after saturation is a transition from one steady state-saturation at pressure-to another one: saturation at surface. It is defined by the borderline condition for time spent at a particular depth (pressure) and breathing gas. Saturation decompression is a delicate physiological process. 
In the human body (assuming standard volume of approximately 80 liters of tissues), there is about 1.37 L of nitrogen (N2) dissolved at atmospheric conditions where partial pressure of nitrogen is 0.79 ata [3]. It means that after saturation exposure using compressed air at 18 m (2.8 ata) there is approximately 2.5 L of nitrogen dissolved in tissues in excess to surface conditions, which must be eliminated gradually during decompression. Standard decompression time after saturation exposure with compressed air at a plateau depth of 18 m (2.8 ata) is about 36 hours [4], which gives the average elimination rate of nitrogen about 1.2 ml/min. This is a low value when compared to other physiological gas exchanges such as the metabolic turn-over of oxygen to carbon dioxide, when about 250 ml of oxygen is consumed each minute to produce about 200 ml of carbon dioxide. Moreover, saturation decompression is a long-lasting process. For nitrogen based gas mixtures, it takes about 18 hours to decompress by 10 meters of depth. Any violation of the physiological desaturation process at the beginning of decompression can lead to the creation of gas bubbles and symptoms of decompression sickness (DCS) many hours later, regardless of whether the diver is still under pressure or has already surfaced [5,6]. For operational purposes, safety of saturation decompression can be easily achieved by slowing down pressure reduction. Such experimental prolongation of decompression had already been used in the past. In 1984, Thalmann reported a development of 18 m (60 fsw) air saturation decompression schedule [7]. Initially, he tried to decompress divers after air saturation using a heliox decompression rate (4 ft/hr to 50 fsw, 3ft/hr to 0 fsw, which gives about 21 hours), but failed, due to the high rate of DCS (four out of ten divers). In a subsequent seven series of 10 or 11 divers, the decompression rate gradually decreased with a reduced rate of DCS until the last series, when no DCS was observed in a group of 10 divers. This last schedule (3 ft/ hr from 60 to 40 fsw, 2 ft/hr from 40 to 20 fsw and 1 ft/hr from 20 fsw to surface, which gives about 33 hours in total) had been accepted and, in fact, after extension by additional time spent at 4 fsw to total of 36 hours, is still binding as Treatment Table 7 [4]. A similar approach was used during extremely deep dives to 650 and 686 meters of the ATLANTIS series, when DCS already occurred during decompression [8,9]. After stopping decompression and slight recompression for the treatment of symptoms, further decompression was commenced at an empirically decreased rate. In both examples, an empirical reduction of decompression rates solved the problem of DCS, but did not lead to an explanation of the physiological process behind it. Despite more than 100 years of extensive research on decompression, we still lack a uniform theory of decompression, which could be used for all types of diving, including deep bounce dives, repetitive dives, and saturation dives. There is no possibility of extrapolating data from non-saturation dives to saturation ones. In 1997, when the US Navy released "new" air decompression tables, Survanshi et al reported that in "old" US Navy tables developed in 1956 and tested with 568 single man-dives and 62 double repetitive-dives, the probability of DCS increased significantly with longer decompression times; and for decompressions lasting longer than 6 hours, it had been accepted to expect 18-36% of DCS [10]. 
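As a quick illustration of the scale of the problem, the nitrogen figures quoted above can be reproduced with a few lines of arithmetic. The sketch below (Python, with our own variable names) assumes simple Henry's-law scaling of dissolved nitrogen with ambient pressure and uses only values stated in the text; it is illustrative and not part of the published model.

```python
# Rough check of the nitrogen figures quoted above; values are taken from the text,
# and dissolved N2 is assumed to scale linearly with ambient pressure (Henry's law,
# fixed FiN2 = 0.79 for compressed air). Illustrative only.
N2_SURFACE_L = 1.37        # litres of N2 dissolved in the body at 1 ata air (PN2 = 0.79 ata)
P_SURFACE_ATA = 1.0
P_PLATEAU_ATA = 2.8        # 18 m saturation depth on compressed air
DECO_TIME_MIN = 36 * 60    # standard decompression time from 18 m, about 36 h

n2_plateau_l = N2_SURFACE_L * P_PLATEAU_ATA / P_SURFACE_ATA
excess_n2_l = n2_plateau_l - N2_SURFACE_L                       # ~2.5 L to be eliminated
mean_elimination_ml_min = excess_n2_l * 1000.0 / DECO_TIME_MIN  # ~1.1-1.2 ml/min

print(f"excess dissolved N2 at 18 m: {excess_n2_l:.2f} L")
print(f"mean elimination over 36 h:  {mean_elimination_ml_min:.2f} ml/min")
```

At roughly a millilitre of nitrogen per minute, even a modest excess gas load translates into a decompression measured in tens of hours, which is the regime where the bounce-dive tables discussed above reportedly performed poorly.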
Still, there is a long way to go to saturation decompressions lasting several days. Since then, those decompression tables have been modified several times, but there are always time limits at every depth beyond which a dive must be treated as an "exceptional" dive with a higher expected risk of DCS [4]. There is no single published decompression system in which saturation decompression is treated the same way as a non-saturation bounce dive, even though from a physiological point of view there should be no difference. Nevertheless, the trial-and-error approach has led to a successful definition of operational procedures and has practically "eradicated" DCS from saturation exposures, which moved diving companies away from research on the physiology of saturation and decompression. However, we recently observed that some large organizations are reviving studies in this field [11] (M. Gennser, personal communication). Therefore, we decided to present for the first time the complete set of data that had been used for the creation of the Polish system of air and nitrox saturation decompressions using the novel concept of the Extended Oxygen Window. The work started in the 1980s, but it remains current and valid, and has never been published in extenso. Model Description After any exposure, the tissue tension (p) of any inert gas at an absolute pressure (P) is calculated on the basis that the rate of uptake or elimination of gas by any tissue is proportional to the driving force ΔP, equal to (P − p): dp/dt = k × (P − p) (Eq 1), where k is the proportionality constant that embodies the effective resistance of that tissue to gas exchange with circulating blood, but is often expressed as a time to half saturation (half-time or T1/2): k = ln 2 / T1/2 (Eq 2) [12]. In their pioneering publication, Haldane et al. advocated using five hypothetical "tissues" with half-times of 5, 10, 20, 40 and 75 minutes as fairly representative of a continuous spectrum of response times [2]. Moreover, Haldane suggested that after any exposure (including saturation), ambient pressure can be safely halved, and such a pressure reduction should not induce DCS. Later on, many modifications to those assumptions were introduced, including an increase in the number of compartments, changes to the values of their half-times and to the maximum allowable pressure reduction after exposure, and differentiation of this pressure reduction between compartments. (For a good review of the development of decompression models, we refer the reader to other studies [13].) Theoretically, from the point of view of saturation decompressions, when all body compartments are fully saturated, the number of parameters that sufficiently describes the elimination of inert gases can be reduced to only two: the maximum allowable gradient for elimination of inert gas, maxΔP, and the half-time of the limiting compartment (slowest tissue) for the specific inert gas (nitrogen or helium), maxT1/2, which limits inert gas elimination. If we assume that the model should decrease ambient pressure at the same rate at which inert gas is eliminated from the slowest (limiting) compartment, the decompression rate can be described as dP/dt = mink × maxΔP, with mink = ln 2 / maxT1/2 (Eq 3). In the past, when creating the Polish system of saturation decompressions, we assumed that both parameters (maxΔP and maxT1/2) must be based on physiological values. In summary, the model consists of the following assumptions: 1. Inert gas desaturation from the human body fully saturated at a given pressure is a continuous process limited by perfusion. 
2. The maximum allowable rate of pressure reduction (the decompression rate, dP/dt) depends proportionally on the maximum persistently allowable elimination gradient from the limiting compartment (maxΔP) and on the elimination coefficient specific for this compartment (mink), which in turn is inversely proportional to the longest tissue half-time (maxT1/2), according to the following equations: dP/dt = mink × maxΔP and mink = ln 2 / maxT1/2. 3. The maximum persistently allowable elimination gradient from the limiting compartment (maxΔP) corresponds to the Extended Oxygen Window (EOW), which in practice equals the partial pressure of oxygen (PiO2) in the inspired breathing mixture, according to the following equation: maxΔP = EOW = PiO2. 4. The tissue half-time for elimination of nitrogen from the limiting compartment (maxT1/2) is 360 min, and the calculated value of k (minkN2) is 0.001925 min^-1 [14,15]. 5. At the beginning of saturation decompression, there is a partial unsaturation in tissues, which allows a faster initial rate of decompression. The value of this unsaturation equals the EOW, which in turn equals the partial pressure of inspired oxygen (PiO2). The decompression rate while passing through the EOW can be safely increased, as it depends on the elimination rate of inert gas from compartments with half-times shorter than 360 min. It is now generally agreed that ΔP for saturation diving is related to oxygen, but in our model this relation goes even further, appointing oxygen as the only factor enabling safe desaturation of inert gas from the human body without creating gas bubbles. Oxygen is consumed in living tissues and this metabolism decreases the partial pressure of oxygen in tissues. In return, carbon dioxide is produced; but due to its higher solubility, the increase in partial pressure of carbon dioxide in tissues (and returning venous blood) is of a lesser degree than the decrease in oxygen. Roughly speaking, an oxygen tension of 100 torr, as in arterial blood, corresponds to an approximate content of 3 ml O2 per 100 ml blood. Theoretically, if each oxygen molecule were exchanged during the metabolic process into a carbon dioxide molecule, there would be no change in the dissolved gas volume, but the tension would fall to less than 5 torr, only because carbon dioxide is more soluble than oxygen [16]. This phenomenon has already been described several times by different authors using different terms: partial pressure vacancy [17], inherent unsaturation [18] or oxygen window [19,20]. Values for the partial pressures of physiological gases (oxygen, carbon dioxide and nitrogen) and water vapor, as well as the unsaturation created by metabolism, are presented in Fig 1. When breathing air at normobaric conditions, the physiological unsaturation is about 60 torr, and it increases proportionally with PiO2, at least up to a PiO2 of 0.9 ata [21]. In our model, we assumed that, when speaking of transport of inert gas in this virtual space, this unsaturation could be safely extended by the water vapor tension (47 torr), which does not participate in the creation of gas bubbles (even if it joins the content of gas bubbles as a consequence of temperature), and by at least part of the partial pressure of carbon dioxide (45-49 torr), which is highly soluble and chemically active; it can be easily transported bound to hemoglobin and plasma proteins as well as converted to bicarbonate ions. The elasticity of tissue cells, which keeps inert gas in solution to some degree, probably plays a role there as well. 
Therefore, in our model we assumed that the oxygen window, described originally by Behnke as 60 torr, can be safely extended by 90 torr, giving approximately 150 torr when breathing air at normobaric conditions, which equals the inspired partial pressure of oxygen (PiO2). We call this the Extended Oxygen Window (EOW) concept. The value of maxT1/2 is 360 minutes [14,15]. This means that kN2 = 1.925×10^-3 min^-1 (which results in a decompression rate of 5.48 kPa/hr for nitrox with a PiO2 of 50 kPa). So the safe rate of saturation decompression can be easily calculated with Eq 1 using ΔP = EOW = PiO2 and a T1/2 of 360 min. But using the EOW concept also implies that at the beginning of the saturation decompression there is already some unsaturation (equal to the EOW), which can be passed at a faster rate as it does not induce any supersaturation. Theoretically, it should be possible to pass this phase in one step, assuming a reasonable decompression rate, but, taking into account the primary assumption of a physiological rationale, one can assume that the maximum allowable rate of decompression in this phase depends on the maximum allowable outflow of inert gas from fast compartments. Such values have never been measured in any real experiments concerning human saturation decompressions, except during isobaric decompressions at atmospheric conditions (Pamb = 1 ata), when inert gas (nitrogen) was eliminated from the human body during oxygen breathing [3]. We used those limits as the maximum allowable elimination outflow of inert gas from fast compartments. The maximum inert gas flow (IGF) from different compartments can be calculated using the modified 5-compartment human body model described by Jones (Eq 7) [3]; the summary of compartments is presented in Table 1. In order to conduct Phase 1 of saturation decompression, the decrease of ambient pressure was programmed so as to keep the elimination rates of nitrogen from all compartments below the specified maximum (Fig 2, Table 2). In order to confirm the correctness of the EOW model, its parameters (ΔP and T1/2) were used for estimation of the risk of DCS in published systems of saturation decompression using air and nitrox, in the range from 7.7 to 50.3 meters, where DCS cases were observed (for references, see Table 3). We calculated the supersaturation exceeding the EOW using the parameters of our model (ΔP = PiO2 and T1/2 = 360 min), noting how much time had been spent in each range of supersaturation (in steps of 10 kPa) (Table 3). If there was no significant supersaturation, which means that the driving pressure ΔP was kept below the EOW, or at least did not exceed the EOW by more than 10 kPa, not a single case of DCS was reported. When any system allowed supersaturation to occur, the DCS rate depended on how much the supersaturation exceeded the EOW. After this preliminary evaluation of the EOW model, the correctness of the system had to be verified in prospective real saturation exposures. So the experimental aim of the study was to verify, by real saturation decompressions, whether these physiologically based parameters could be used for programming safe saturation decompressions using compressed air and nitrox in the whole range of depths available for those gases, namely from 18 to 45 meters. Material and Methods According to the model, a decompression profile defined by the EOW is divided into three phases (Fig 3). 
Phase 1 is a fast initial ascent from saturation plateau through the EOW equal to PiO2 in breathing mixture. It is conducted with a decreasing rate in the time of 90 minutes when using nitrox with PiO2 of 50 kPa or 144 minutes when using compressed air (PiO2 of 56 kPa). The decompression rate changes as described in Table 2 according to maximum inert gas flow from compartments faster than the limiting one. Phase 2 starts when EOW has been passed and the slowest compartment eliminating inert gas starts limiting the desaturation process. In this phase, the decompression rate is proportional to PiO2 with the proportionality coefficient related to the inert gas elimination factor from the limiting (slowest) compartment. For nitrox with PiO2 of 50 kPa, the rate of decompression is 5.48 kPa/hrs (0.548 m/hr, 0.009 m/min). When breathing compressed air, a fractional amount of oxygen is constant, but partial pressure changes, depending on ambient pressure and decompression rate changes, accordingly. Moreover, when breathing compressed air, Phase 2 extends until end of decompression. Otherwise, Phase 3 of decompression starts at a depth of 11 meters, when fractional content of oxygen equals 24% (assuming 50 kPa partial pressure of oxygen). This fractional level should never be exceeded in a hyperbaric environment due to safety reasons in order to avoid fire risk. Therefore, in this phase, the partial pressure of oxygen gradually decreased in order to keep it at level of 23% (±1%). As a consequence, the decompression rate also gradually decreased proportionally to PiO2. When pressure in a hyperbaric chamber was equal to 1.5 m, the decompression rate was manually modified in order to decrease ambient pressure to normobaric pressure in less than 60 minutes (regardless of the breathing mixture and calculated decompression rate). This procedure served as an internal control of correctness of the model and its parameters, as it was expected that slight supersaturation can be safely tolerated if created at the last moment of decompression. If there were any gas bubbles created at the beginning of decompression, such supersaturation would induce symptoms of DCS. This approach was for research purpose only. In standard operations, the calculated decompression rate can be extended up to surface (normobaric pressure). The study protocol was approved by the Ethics Committee of the Medical University of Gdansk and-according to this protocol-all participants provided written informed consent for participation. Young, healthy volunteers (either recreational or professional divers) aged from 20 to 44 years were chosen to participate in the experiments. The exclusion criteria included any chronic disease, and history of decompression disease or injury, which could provoke decompression illness-for example, bone fractures. Some divers had been preliminarny trained in a hyperbaric environment, and some had not. Expositions were made in a dry three-chamber diving simulator (LSH-200) located in the National Center for Hyperbaric Medicine in Gdynia, Poland. The system can be pressurized up to 200 m with any breathing mixture of oxygen, nitrogen and helium. Environmental parameters are controlled by the computerized automatic life-support system-ambient pressure: ±0.1%; partial pressure of oxygen: ±1%; fractional amount of carbon dioxide: 0.3% in samples decompressed to normobaric pressure; temperature: 23-25°C; relative humidity: 40-70%. In every single exposure exposition, two to five diver-testers took part. 
While staying on the saturation plateau, partial pressure of oxygen in the chamber was kept on a constant level of 40 kPa (± 2.5 kPa), and the content of the atmosphere was controlled by the chemical analysis of decompressed gas samples using the HPLC for any toxic impurities. Every day divers were given a task of 30-minute work on a cycloergometer (100-150 watts) avoiding any significant exercise at least 12 hours before commencing saturation decompression. The shortest stay under pressure was 2.5 days (60 hours) to ensure full saturation with nitrogen. After 2.5-4 days of staying on a plateau, the partial pressure of oxygen in the chamber was increased from 40 kPa to 50 kPa (± 2 kPa), and after 3 hours, a continuous decompression from saturation plateau to surface pressure was commenced using the decompression profile calculated for specific breathing gas of decompression as described above. Due to safety reason of fire prevention, the maximum allowed fractional content of oxygen in the chamber atmosphere was 24% (23 ±1%). After every exposure, divers were tested by a physician responsible for experiments and stayed near the chamber for 24 hours. The decompression was assumed safe unless one or more of the following occured: 1) bends, 2) physiological disturbances, 3) subjective complaints, and 4) abnormal results of basic laboratory blood tests (morphology, platelet counts, liver tests). After 24 hours divers were dismissed but requested to report any symptoms suggesting DCS directly to the researchers. They were also contacted after one week for any late occurring symptoms. Results and Discussion In total, 72 man-expositions were conducted during the study, including 36 man-expositions using compressed air in a range of depth from 18 to 30 m and 36 man-expositions using nitrox (nitrogen and oxygen) in a range of depth from 20 to 45 m. All saturation decompressions are presented in Fig 4 and Table 4. Neither bends nor any other symptoms of decompression stress were observed in any of the 72 expositions. Statistical analysis of obtained results based on a deterministic criterion of the lack of bend symptoms suggested that the probability of occurrence of bends in the employed decompression method does not exceed 0.042 with upper confidence interval of 95%, which means that the upper 95% CI for DCS is less than 4.2% (calculations made by using the rule of three, as there was no positive event [22]). During real exposures with humans, we confirmed that use of physiological parameters based on the Extended Oxygen Window concept (ΔP = PiO2, T1/2 = 360min) for saturation decompressions using compressed air and nitrox in the range of depth from 18 to 45 meters allows safe decompression after very long exposures. The maximum allowable gradient for elimination of inert gas ΔP after saturation exposure has been studied extensively for compressed air in the past. Many different relations between plateau pressure P1 (called also "storage depth" in operational procedures) and maximum allowable reduction of pressure ΔP after saturation stay at plateau pressure P1 have been proposed ( Table 5). The first relation, already suggested by Haldane [2], ΔP = 0.500×P1 is a mathematical equation describing his assumption that one can safely reduce pressure by half even after saturation stay at depth. It suggests that it is possible to surface directly after an infinite stay at a depth of 10 meters when breathing air and a fast ascent for the first 14 meters after saturation with air at 18 meters. 
No one has ever tried this, but later experiments from even shallower depths induced many cases of DCS. The last equation in the table (ΔP = 0.500×P1-0.500) was proposed in 1967 by Griffiths for caisson workers as a modification of Haldane's assumption, but applied to relative pressure read-outs from manometers scaled in gauge pressure. In fact, this is an example of the introduction of experimentally found safety factors to increase safety of decompression by decreasing pressure reduction. Interestingly, two systems, one used in Poland and the other one described theoretically by Behnke, use equations which give significantly smaller values than others both for DD (2.5 and 1.0 m, respectively) and ΔD18 (5.6 and 4.4 m, respectively) [19,23]. If all relations were expressed in the general form of ΔP = a×P1 + b, only those two equations would show direct connection with oxygen fraction in inspired breathing mixture (FiO2 = 0.2). These relations will be also be discussed later on. The other parameter, which limits the desaturation process, depends on the elimination rate of inert gas from the slowest compartment, and is described in Eqs 1 and 2 as the maximum half-time ( max T1/2) for elimination of nitrogen. Since 1908 when J. S. Haldane proposed 75 min for the slowest compartment based on animal research using goats, many longer values have been proposed in the literature. The evaluation of the max T1/2 for nitrogen is presented in the following table (Table 6). It is noteworthy that these values have been used exclusively for mathematical calculations, so it was never discussed which anatomical tissue limits saturation. Based on animal research, Haldane originally proposed 75 min for divers, the smallest value in the table [2]. This value was also used almost 30 years later [24]. In the 1950s the longest half-time-about 270 min-was established during isobaric desaturation of inert gas while breathing pure oxygen [3], but a similar value-240 min-was proposed and has been used for decades in the US Navy [25], based on bounce (non-saturation) dives. At the same time others proposed values between 120 and 720 min. Later on, in the 1980s, significantly longer values were proposed, even 635 or 720 min [25][26][27][28], just to compensate high values accepted at the same time for ΔP. The longest value-1280 min-was proposed by Miller just for mathematical calculations [25], as there is no reason to expect that in human physiology there is any tissue of such slow perfusion, which could play the limiting role for saturation decompression. After decades of trying different values, several authors using different approaches including hypobaric exposures [29,30], long saturation decompressions [15], and probabilistic decompression model risk predictions using linear-exponential kinetics [31] amicably report that physiological values of T1/2 (even if the specific anatomical compartment has not been identified) are somewhere between 320 and 420 min. As can be seen from the above, the range of max ΔP's (from ΔP = 0.500×P1 to ΔP = 0.500×P1-0.5), max T1/2's (from 75 min to 1280 min) and the resulting value of k (from 0.009242 to 0.000541, respectively) is very wide. So, theoretically, the combination of both ΔP and max T1/2 as describing the decompression rate using Eq 1 is even wider. But, from a physiological point of view, such situation is highly improbable, as there must be one set of limiting values for both parameters for describing a real human desaturation process after saturation. 
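To make the quoted spread concrete, the short Python sketch below evaluates Eq 2, k = ln 2 / T1/2, for the half-times that have been proposed for the limiting compartment (cf. Table 6). Variable names are ours, and the snippet only reproduces numbers already given in the text; it is an illustration, not part of the published system.

```python
import math

def k_from_half_time(half_time_min: float) -> float:
    """Elimination coefficient of a Haldanian compartment (Eq 2): k = ln 2 / T1/2."""
    return math.log(2.0) / half_time_min

# Half-times (minutes) proposed in the literature for the slowest (limiting) compartment, cf. Table 6.
proposed_half_times_min = [75, 240, 270, 360, 635, 720, 1280]

for t_half in proposed_half_times_min:
    marker = "  <- value used in the EOW model" if t_half == 360 else ""
    print(f"T1/2 = {t_half:4d} min  ->  k = {k_from_half_time(t_half):.6f} 1/min{marker}")

# The extremes bracket the range quoted in the text (roughly 0.0092 down to 0.00054 1/min),
# and T1/2 = 360 min gives k = 0.001925 1/min, the value adopted for nitrogen.
```

Each pairing of a maxΔP with a maxT1/2 from these wide ranges would yield a different decompression rate, which is why only one, physiologically grounded, combination can describe the real desaturation process.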
Therefore, we used the EOW concept based on the physiological oxygen window created by metabolism. There is strong physiological rationale behind using this concept for defining the ΔP parameter, which has been already presented in the introduction, and our real saturations directly confirmed the concept. But there is also other evidence supporting this model and its parameters, including the dose-reaction relationship between the supersaturation exceeding the EOW and reported observations of gas bubbles found during direct decompressions after shallow saturations with air [32][33][34]. The dose-response relationship, or exposure-response relationship, describes the change in effect on an organism caused by differing levels of exposure (or doses) to a stressor (overpressure/supersaturation) after a certain exposure time. When we applied the parameters used in our model against those air saturation decompressions reported in literature in the range between 7.7 and 50.3 m, we found the clear dose-response relationship between supersaturation exceeding the extended oxygen window (in steps of 10 kPa) with the rate of decompression sickness (Fig 5). If the supersaturation is kept close to zero level when compared to EOW (in the range of 0 to 10 kPa of supersaturation), there was no single case of DCS reported. In Extended Oxygen Window Concept for Air and Nitrox Sat Decompressions those systems where high supersaturation occurred (exceeding 40 kPa), DCS was observed in as many as 44% of the subjects, even if time spent with such supersaturation was sometimes relatively short (slightly more than one hour). Having such a clear relation between supersaturation exceeding the EOW with rate of DCS means that our model can be used not only for programming of saturation decompressions, but also for estimation of DCS risk for any other saturation systems. Other indirect evidence that supports our model comes from observations of gas bubbles found during direct decompressions after shallow saturations with air. Direct decompression is defined as the depth of saturation after which the diver can surface without need of any decompression other than the reduction of pressure in a few minutes only. It is expected that this value represents the largest overpressure in tissues that can be sustained by a diver. In the past, this value has been used to estimate the driving force for desaturation of inert gas (ΔP) (see Table 5). Taking into account our model, the maximum allowable depth after which divers can conduct the direct decompression to surface after infinite stay at pressure breathing air (FiO2 = 0.21) is only 2.5 meters (ΔP = PiO2). This limit is significantly less than values accepted in other systems. However, there are some reports in the literature that gas bubbles have been observed after direct decompressions to surface after saturation with air at about 8.7-9.0 m (50% DCS, 7 out of 15), 25% DCS (5 out of 19) from depth of 7.7-8.0 m and 16% DCS (22 of 138) deeper than 6 meters [32][33][34]. Even if no DCS was observed during direct decompressions from depths shallower than 6 m (0 out of 448) [35], venous gas emboli (VGE) were noted after direct decompressions from depths as shallow as 6.4, 5.0, and even from 3.8 m (!) [33]. In our model we assumed even more restrictive limitations (2.5 m when breathing air). In the literature, there are two systems described by Behnke and Griffiths that suggest even lower values, 1.0 m and 0.0 m, respectively [19,36]. 
As mentioned previously, the last equation in the table is the mathematical representation of the practical approach of Griffiths for decompressing caisson workers using the modified Haldane rule (dividing pressure by a factor of 2), but applied to relative pressure read-outs from manometers scaled in gauge pressure instead using absolute pressure as suggested by Haldane. Such a technique gave safer results. For example, divers from a depth of 18 meters could be decompressed to a depth of 9 meters only by Griffiths, which is deeper (and which means safer), then by 14 meters to a depth of 4 meters only, as allowed by Haldane's equation (see the column D18 of Table 5). In fact, Griffiths' approach is an example of the introduction of an experimentally found safety factor to increase safety of decompression just by decreasing pressure reduction. Due to its practical approach, it cannot be considered as valuable information for a description of the physiological processes behind it. The other limit for direct decompression, which is lower than this based on EOW, comes originally from Behnke and his first description of the oxygen window [19]. His equation leads to the declaration that direct decompression can be safely conducted only from a depth of 1.0 meter. Most certainly, it is perfectly safe from a practical point of view. But yet another consequence of the Behnke's relation is that from the air saturation depth of 18 meters, one could decompress divers only by 4.4 meters in Phase 1 of decompression. During our experiments, we decompressed divers from a depth of 18 meters up to 12.5 meters passing the EOW equivalent to a depth change of more than 5.5 meters. If the limit suggested by Behnke were true, the supersaturation created at the beginning of long decompression lasting more than 30 hours almost surely would induce gas bubbles and DCS, at least in some divers. Therefore, it seems that values of ΔP lower than PiO 2 are unnecessarily too restrictive, and values greater than PiO 2 induce gas bubbles. However, it has never been directly related with the physiological concept of the EOW but rather on simple proportionality of the decompression rate to PiO2 with the k coefficient treated as a mathematical parameter only, not representation of the limiting body compartment. As a consequence, according to the best knowledge of authors, the assumption that ΔP equals PiO 2 as a consequence of the EOW has never been implemented explicitly in any existing system for saturation decompression. Even in the US Navy, when Vann proposed a similar theoretical model, he recommended reducing the ascent rate for deeper saturation depths, which is in clear disagreement with the model driven by oxygen when the decompression rate should be independent from depth [38]. Interestingly, the systems published by the US Navy went in an opposite direction, allowing faster ascent rates for deeper saturation depths [4]. It is evident that none of the existing systems require or suggest really fast ascent in the first phase of decompression, namely through the EOW, just using the constant rate of decompression [4,11,44,[46][47][48][49]. The easiest way to recognize whether a system relies fully on the EOW concept is the shape of the decompression profile, especially Phase 1, which is a relatively fast decrease of pressure when passing an inherent unsaturation according to the EOW (Fig 3). 
As a direct consequence of this acceleration, the difference in decompression times between different systems can be significant (about 20% in case of air saturation at 18 meters of depth) (Fig 6). Theoretically, if one would decompress without acceleration within Phase 1, decompression afterwards at saturation at 18 meters of depth would be longer by more than 20% (6.6 hrs/30.6 hrs). Hypothetically, we cannot exclude that this fast ascent in the first stage of decompression eliminates such a large amount of inert gas dissolved in tissues faster than from the limiting compartment that it creates space for the elimination of inert gas from the limiting compartment in Phase 2 ( Fig 7). If such a phenomenon occurs this would mean that Phase 1 not only increases efficiency of decompression, but also increases safety. This has not been proven directly in our experiments, but the absence of DCS symptoms after many hours of later continuous decompression suggests that indeed, this technique does not induce a significant amount of free gas phase at the beginning of decompression. Comparison of different saturation decompression profiles. PL1 is the real conducted air saturation using the EOW concept; PL2 is a hypothetical decompression profile after air saturation without fast Phase 1. Thalmann and US Navy T7 show air saturation decompression proposed in the US Navy [4,7]. Our study has two main limitations. One limitation is the lack of a direct search for gas bubbles using Doppler monitoring of decompressed divers. Therefore, the main confirmation is based on clinical evaluation of DCS occurrence, and from dose-reaction relationship for those experiments where there were DCS cases. Such an approach allows us to compare our system with other decompression schedules validated using same criteria, but precludes drawing final conclusions concerning the level of decompression stress induced by silent gas bubbles, if any. There are also other indicators proposed as markers of decompression stress after surfacing, including parameters of activation of platelets and fibrinolysis, but none of them was generally approved as a discriminating factor [50][51][52]. Another limitation is that we validated our model only in a physiological range of PiO2 (from 0.4 to 0.6 ata). It can be assumed that in this range, both the oxygen window and the EOW are directly proportional to PiO2 [21,53]. This range is most often used in operational practice, as it does not induce any significant vasoconstriction or toxic effects on lungs or CNS, so it can be kept for months [54]. However, it would be interesting to know how the model behaves outside this range of partial pressure of oxygen. Such knowledge could be of great value also for practical use of the model for planning of saturation decompressions in case of emergency, when high partial pressure of oxygen is empirically used for accelerating desaturation. Some modeling research has already been done [55], some studies on animals have been conducted [56], several procedures have been recommended [57,58], and recently, this method has been investigated in humans [11,59]. Even if "the advantages of oxygen appear far less than predicted by current decompression models" [60], using a high amount of oxygen for accelerating decompression is already recommended in case of emergency [55,61]. In summary, decompression after saturation with nitrogen based breathing mixtures is driven by oxygen. 
Initially, ambient pressure can be reduced at a higher rate allowing elimination of inert gas from faster compartments using unsaturation created by oxygen metabolism and properties of carbon dioxide and water vapor (the Extended Oxygen Window concept). Then, keeping the driving force (ΔP) for long decompression not exceeding partial pressure of oxygen in inspired breathing mixture (PiO2) allows optimal elimination from the limiting compartment with a half-time of 360 min. gratitude to his achievements. He was actively participating in the study reported there and in preparation of the draft version of the manuscript. He clearly expressed his full agreement with this submission. We acknowledge also Prof. Romuald Olszanski and Prof. Kazimierz Dega from the Department of Maritime and Hyperbaric Medicine, Military Institute of Medicine in Gdynia, Poland for their cooperation during the study.
8,751.4
2015-06-25T00:00:00.000
[ "Engineering", "Medicine" ]
Tetrazole exerts anti-hepatitis effect in mice via activation of PI3K/Akt pathway, inhibition of cell autophagy and suppression of inflammatory cytokine expressions Purpose: To investigate the effect of tetrazole on concanavalin A (Con A)-induced hepatitis in mice, and the underlying mechanism(s). Methods: Thirty 5-week-old, male BALB/c mice (mean weight, 30.5 ± 1.04 g) were used for this study. They were randomly assigned to six groups of five mice each: control group, hepatitis group and four treatment groups. With the exception of control group, hepatitis was induced in all mice with Con A (20 mg/kg) via their tail veins. The treatment groups received varied doses of tetrazole (1.0 6.0 mg/kg) within 1 h after hepatitis induction, while mice in the control group received an equivalent volume of normal saline in place of tetrazole. Serum activities of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) were determined while expressions of interleukin-2 (IL-2), tumor necrosis factor (TNF), and interferon gamma (IFN) were evaluated by enzyme-linked immunosorbent assay (ELISA) kits. Expressions of protein kinase B (Akt), phosphoinositide 3-kinase (PI3K), nuclear transcription factorB (NFB), and autophagy-related genes were determined by real-time quantitative polymerase chain reaction (qRT-PCR) and Western blotting. Results: Con A-induced hepatitis significantly increased the activities of serum ALT and AST in the mice. However, after treatment with tetrazole, the activities of these enzymes were significantly and dose-dependently reduced in the treatment groups, relative to hepatitis group (p < 0.05). The levels of IL-2, IFNand TNFwere significantly increased in hepatitis group when compared with the control group (p < 0.05). However, treatment with tetrazole significantly inhibited the expressions of these parameters. There were no significant differences in the levels of expressions of Akt mRNAs among the treatment groups (p > 0.05). The levels of expressions of LC3II and Beclin 1 were also significantly upregulated in hepatitis group, when compared with control group (p < 0.05). However, expression levels of LC3II and Beclin 1 were significantly and dose-dependently reduced by tetrazole treatment Conclusion: Tetrazole is effective in the treatment of hepatitis via mechanisms involving the activation of PI3K/Akt pathway, inhibition of cell autophagy and suppression of inflammatory cytokines expressions. INTRODUCTION Hepatitis is a liver disease caused by viral infection, exposure to toxins, excessive alcohol consumption and immunological disturbance [1,2]. Its pathogenesis is complex and involves several pathways and molecules [3][4][5]. The disease progression involves activation of T cells which in turn stimulate the secretion of inflammatory cytokines and enzymes in the blood [6,7]. The activities of ALT and AST, and levels of inflammatory cytokines are usually elevated in the blood following the activation of T cells [8,9]. Concanavalin A (Con A), a plant lectin, activates T lymphocytes [10]. Liver damage in hepatitis is caused by aggregation and infiltration of Tlymphocytes [11,12]. In addition, the expressions of IL-2, TNF-and IFN-are involved in the pathogenesis of hepatitis [13]. Nuclear transcription factor-kappa B (NF-B) regulates the expressions of inflammatory cytokines in hepatitis [13]. However, the activity of NF-B is regulated by I B with the involvement of PI3K/Akt pathway [13]. 
Autophagy, a type of programmed cell death acts by engulfing cellular organelles in the form of autophagosomes and transfering them to lysosomes for degradation [14,15]. The autophagic process is regulated by activation of several pathways such as c-Jun-N and AMPK. The formation of an autophagosome involves Beclin 1 and mTOR pathways which are regulated by the PI3K/Akt pathway [16,17]. The four nitrogen atoms in tetrazole ring are responsible for the biological activity of the compound. Tetrazole-bearing compounds possess chemotherapeutic effects such as antiinflammatory, antimicrobial, anti-nociceptive and anticonvulsant activities [18]. There is a need for the development of new and effective chemotherapeutic agents that can effectively ameliorate the symptoms and complications of hepatitis. The aim of this study was to investigate the effect of tetrazole on Con A-induced hepatitis in mice, and the underlying mechanism(s). EXPERIMENTAL Materials The BALB/c mice were purchased from Beijing HFK Bioscience Co., Ltd., while ALT and AST automated biochemical analyser was a product of Olympus AU1000 (Japan). ELISA kits were purchased from Santa Cruz Biotechnology Inc. (USA), while Kinematica tissue pulverizer was obtained from Shanghai Xin Yu Biotech Co., Ltd. RNeasy Mini kit was purchased from Qiagen, Inc. (USA) and NanoDrop 1000 spectrophotometer was obtained from Thermo Fisher Scientific Inc. (USA). Real time polymerase chain reaction (RT-PCR, 7900HT model) was a product of ABI (USA), while SYBR Premix EX Taq was obtained from Takara Biotechnology Inc. (Japan). Mice A total of thirty 5-week-old BALB/c male mice weighing 28.2 to 32.8 g (mean weight = 30.5 ± 1.04 g) were used for this study. They were housed in plastic cages under standard conditions of animal care and had free access to standard feed and water. The mice were exposed to 12 h light/dark cycles and maintained at 25 ˚C and 48 % humidity. The study protocol was approved by the Laboratory Animal Committee of China Medical University (approval no. CMU/17/187), and the study procedures were carried out according to the guidelines of National Institutes of Health [19]. Treatment The mice were randomly assigned to six groups of five mice each: control group, hepatitis group and four treatment groups. With the exception of control group, hepatitis was induced in the mice with Con A (20 mg/kg) through their tails veins. The treatment groups received varied doses of tetrazole (1.0 -6.0 mg/kg bwt) within 1 h after induction of hepatitis, while mice in the control group received equivalent volumes of normal saline. Determination of activities of ALT and AST, and serum levels of inflammatory cytokines After 12 h of treatment, the mice were sacrificed under isoflurane anaesthesia and blood samples were collected through cardiac puncture. The blood was centrifuged at 3000 rpm for 30 min at room temperature to obtain serum which was used for biochemical analysis. Serum activities of ALT and AST were determined using automated biochemical analyser, while the expressions of IL-2, TNF-, and IFN-were determined using appropriate ELISA kits. Western blotting Liver tissues collected from the mice were stored in liquid nitrogen at -80 ∘ C and sliced into thin sections (5 µm) using refrigerated microtome, and homogenized using Kinematica tissue pulveriser. The resultant tissue homogenate was washed twice with phosphate-buffered saline (PBS) and centrifuged at 13,000 g for 25 min at 4 ˚C. 
The protein concentration of the supernatant was determined using BCA assay kit. A portion of total tissue protein (20 -30 μg) from each sample was separated on a 12 % sodium dodecyl sulphate (SDS)-polyacrylamide gel electrophoresis and transferred to a fixed polyvinylidene fluoride membrane at 110 V and 90 ° C for 120 min. Subsequently, non-fat milk powder (3 %) in Trisbuffered saline containing 0.2 % Tween-20 (TBS-T) was added with gentle shaking at 37 o C and incubated to block non-specific binding of the blot. Incubation of the blots was performed overnight at 4 ∘ C with primary antibodies of IL-2, TNF-, IFN-, AKT, p-AKT, PI3K, p-P13K, LC3II, Beclin 1 and -actin at a dilution of 1 to 500. Then, the membrane was washed thrice with TBS-T and further incubated with horseradish peroxidase-conjugated goat anti-rabbit IgG secondary antibody for 1 h at room temperature. The blot was developed using an X-ray film. Grayscale analysis of the bands was performed using ImageJ analysis software (4.6.2). Respective protein expression levels were normalized to that of β-actin which was used as a standard reference. Quantitative polymerase chain reaction (qRT-PCR) Total RNAs were isolated from portions of liver homogenate using RNeasy Mini kit and determined spectrophotometrically. The RNAs were reverse-transcribed to cDNAs, using random primers at 45 ˚C for 2 h. The samples were heated at 95 ˚C for 10 min. The PCR amplification of the reverse-transcribed reaction mixture was carried out using 20 μl reaction mixture and equal volume of SYBR Premix Ex TaqTM II. The PCR conditions were: predenaturation at 95 ℃ for 30 sec, denaturation at 95 ℃ for 3 sec, annealing at 60 ℃ for 34 sec, and 50 cycles. The procedure was performed in triplicate. Relative expression was quantified using Stratagene Mx3000P software, and β-actin gene was used as internal reference. The primers sequences used for qRT-PCR are shown in Table 1. Statistical analysis Data are expressed as mean ± SD, and the statistical analysis was performed using SPSS (11.5). Groups were compared using Student's ttest. P < 0.05 was considered statistically significant. Serum ALT and AST Con A-induced hepatitis significantly increased the activities of serum ALT and AST in the mice. However, after treatment with tetrazole, the activities of these enzymes were significantly and dose-dependently decreased in the treatment groups, relative to hepatitis group (p < 0.05). Their activities in hepatitis mice treated with 6 mg/kg bwt tetrazole were 62 ± 19 U/L and 131 ± 32 U/L, respectively ( Figure 1). Figure 1: Effect of tetrazole treatment on the activities of ALT and AST; * p < 0.05, * * p < 0.01 and * * * p < 0.001, when compared to hepatitis group Inflammatory cytokines The expression levels of IL-2, IFN-and TNFwere significantly increased in hepatitis group, when compared with the control group (p < 0.05). However, treatment with 6 mg/kg tetrazole producing the most significant inhibition (p < 0.05). These results are shown in Figure 2 A and B. Expressions of Akt and PI3K As shown in Figures 3 A and B, the expressions of Akt and PI3K were significantly and dosedependently enhanced in treatment groups, when compared with the control and hepatitis groups (p < 0.05). However, there were no significant differences in the levels of expressions of Akt mRNAs among the treatment groups (p > 0.05). The expressions of p-Akt and p-PI3K were also significantly higher in the treatment groups than in the hepatitis group (p < 0.05). 
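The qRT-PCR paragraph above states that relative expression was quantified against the β-actin reference but does not spell out the formula; the standard 2^-ΔΔCt (Livak) calculation would look roughly like the sketch below. This is an assumption on our part, and all function names and Ct values are hypothetical, not data from the study.

```python
def relative_expression(ct_target_sample: float, ct_ref_sample: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    """Livak 2^-delta-delta-Ct relative quantification with a reference gene (e.g. beta-actin)."""
    delta_ct_sample = ct_target_sample - ct_ref_sample        # target vs. reference in the sample
    delta_ct_control = ct_target_control - ct_ref_control     # target vs. reference in the comparator
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Hypothetical Ct values for one gene (e.g. LC3II) in a tetrazole-treated mouse,
# compared against an untreated hepatitis animal, both normalized to beta-actin.
fold_change = relative_expression(ct_target_sample=24.1, ct_ref_sample=17.9,
                                  ct_target_control=22.6, ct_ref_control=18.0)
print(f"fold change vs. comparator: {fold_change:.2f}")   # < 1 indicates down-regulation
```

A fold change below 1 relative to the hepatitis group would correspond to the down-regulation of LC3II and Beclin 1 reported in the Results.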
Effect of tetrazole treatment on NF-B pathway The expression of NF-B was significantly higher in hepatitis group than in control group, but was significantly and dose-dependently reduced after treatment with tetrazole (p < 0.05). However, the expressions of 1 Bα and 1 Bβ were significantly upregulated in control and treatment groups, relative to hepatitis group (p < 0.05; Figure 4). Expressions of autophagy-related genes The expressions levels of LC3II and Beclin 1 were significantly upregulated in hepatitis group, when compared with control group (p < 0.05). However, LC3II and Beclin 1 were significantly and dose-dependently downregulated by tetrazole treatment, with 6 mg/kg bwt tetrazole producing maximum inhibition ( Figures 5 A and B). DISCUSSION Hepatitis is a serious health condition caused by viral infection, exposure to toxins, excessive alcohol consumption and immunological disturbance [1]. At present, there are no effective therapeutic agents for hepatitis [19]. The present study investigated the effect of tetrazole on Con A-induced hepatitis in mice, and the underlying mechanism (s). Induction of hepatitis leads to release of inflammatory cytokines such as TNF-, IFN-, IL-2 and IL-6 [20]. In this study, the levels of IL-2, IFN-and TNF-were significantly increased in hepatitis group when compared with the control group. However, treatment with tetrazole significantly inhibited the expressions of these parameters, with 6 mg/kg bwt tetrazole producing the most significant inhibition. These results are in agreement with those reported in previous studies [20]. It is possible that tetrazole regulated the secretion of these inflammatory cytokines in Con A-induced hepatitis mice. Increased activities of serum ALT and AST are associated with liver damage [12]. In this study, Con A-induced hepatitis significantly increased the activities of serum ALT and AST in the mice. However, after treatment with tetrazole, the activities of these enzymes were significantly and dose-dependently reduced in the treatment groups, relative to hepatitis group. These results suggest that tetrazole may prevent liver damage in hepatitis by inhibiting the release of inflammatory cytokines. It is likely that the upregulation of inflammatory cytokine expressions enhances the activities of ALT and AST in serum of hepatitis mice. Expressions of genes associated with the secretion of inflammatory cytokines are regulated by NF-B [21]. Nuclear transcription factor-B (NF-B) plays a key role in the expression of proinflammatory genes and the development of hepatitis [22,23]. The results of Western blotting showed that the expression of NF-B was significantly higher in hepatitis group than in control group, but was significantly and dosedependently reduced after treatment with tetrazole. However, the levels of expressions of 1 Bα and 1 Bβ were significantly upregulated in control and treatment groups, relative to hepatitis group. These results suggest that the pathogenesis of hepatitis may involve the degradation of NF-B, and that tetrazole might prevent I B-and I B-degradation. It is likely that treatment with tetrazole suppressed the translocation of NF-B to the nucleus of hepatocytes in hepatitis mice. It has been reported that inhibition of I B-and I Bdegradation plays a central role in downregulation of the expressions of inflammatory factors [24]. Induction of cell autophagy is regulated by several factors, the most common of which are PI3K and Akt [14]. 
In the present study, the expressions of Akt and PI3K were significantly and dose-dependently increased in the treatment groups, when compared with the control and hepatitis groups. However, there were no significant differences in the levels of Akt mRNA expression among the treatment groups. The levels of p-Akt and p-PI3K were also significantly higher in the treatment groups than in the hepatitis group. These results suggest that tetrazole exerts its anti-hepatitis effects via activation of the PI3K/Akt pathway. In this study, treatment with tetrazole significantly down-regulated the expressions of Beclin 1 and LC3II, an indication that tetrazole may exert its anti-hepatitis effect via the inhibition of cell autophagy.
CONCLUSION
Tetrazole is effective in the treatment of Con A-induced hepatitis in mice, and its anti-hepatitis effect is exerted via mechanisms involving activation of the PI3K/Akt pathway, inhibition of cell autophagy and suppression of inflammatory cytokine expression.
Conflict of interest
No conflict of interest is associated with this work.
3,201.8
2021-05-25T00:00:00.000
[ "Biology", "Medicine", "Chemistry" ]
Free Will and Mental Powers In this paper, we investigate how contemporary metaphysics of powers can further an understanding of agent-causal theories of free will. The recent upsurge of such ontologies of powers and the understanding of causation it affords promises to demystify the notion of an agent-causal power. However, as we argue pace (Mumford and Anjum in Analysis 74:20–25, 2013; Am Philos Q 52:1–12, 2015a), the very ubiquity of powers also poses a challenge to understanding in what sense exercises of an agent’s power to act could still be free—neither determined by external circumstances, nor random, but self-determined. To overcome this challenge, we must understand what distinguishes the power to act from ordinary powers. We suggest this difference lies in its rational nature, and argue that existing agent-causal accounts (e.g., O’Connor in Libertarian views: dualist and agent-causal theories, Oxford University Press, Oxford, 2002; Lowe in Personal agency: the metaphysics of mind and action, Oxford University Press, Oxford, 2013) fail to capture the sense in which the power to act is rational. A proper understanding, we argue, requires us to combine the recent idea that the power to act is a ‘two-way power’ (e.g., Steward in A metaphysics for freedom, Oxford University Press, Oxford, 2012b; Lowe in: Groff, Greco (eds) Powers and capacities in philosophy: the new aristotelianism, Routledge, New York, 2013) with the idea that it is intrinsically rational. We sketch the outlines of an original account that promises to do this. On this picture, what distinguishes the power to act is its special generality—the power to act, unlike ordinary powers, does not come with any one typical manifestation. We argue that this special generality can be understood to be a feature of the capacity to reason. Thus, we argue, an account of agent-causation that can further our understanding of free will requires us to recognize a specifically rational or mental variety of power.
Introduction
Free will is puzzling. It seems clear that we have the capacity to control our own actions. But it can seem impossible to comprehend exactly how such a capacity can exist. One of the main obstacles to understanding free will is that it seems to make two opposite demands. Free will is often associated with a lack of determination: an agent's movements do not seem to be up to her if it was already settled long before her birth that she would make them. 1 This intuition undergirds the so-called libertarian view that the existence of free will is not reconcilable with universal determinism. However, undetermined events cannot be up to oneself either, for they would be merely random or accidental. 2 This intuition drives the so-called luck objection to libertarianism. Hence, paradoxically, free will seems to both require and exclude that our actions are necessitated or determined. How can that be? According to one prominent group of philosophers, the key to answering this question lies with agent-causation. 3 Their idea is that human actions are not part of a long causal chain of events, but are instead caused directly by agents. Free actions are thus determined in the sense that they are caused by their agents, but undetermined in the sense that they are free from determination by prior natural events.
4 Clarke, for instance, argues that a human agent is therefore 'in a strict and literal sense an originator of her free decisions, an uncaused cause of them' (Clarke 2003, p. 134). 5 Until recently, 6 agent-causation was often discarded as 'more puzzling than the problem it is supposed to be a solution to ' (van Inwagen 1986, p. 151) or as 'obscure and panicky metaphysics' (Strawson 1962, p. 27). But thanks to recent developments in contemporary metaphysics, it is possible for defenders of agent-causation to argue that this challenge no longer constitutes a serious threat. For they can rely on the emergence of many welldeveloped accounts of power (e.g., Mumford 1998;Ellis 2001;Bird 2007;Marmodoro 2010;Heil 2012). A power, the rough idea is, is a dispositional property of an object or substance that explains why it can exhibit a particular manifestation. Typically such a manifestation comes about when the power is in the right stimulus or manifestation conditions. 7 A bit of salt's dissolving is thus causally explained by pointing out that salt is water soluble, together with the fact that the salt was placed in water. Substances and powers, on such a view, are ubiquitous and hence, many powers metaphysicians claim that causation in general consists in a persisting substance manifesting one of its powers. Therefore, the idea that free human action, too, might be the result of the activity of a substance (the agent) need no longer seem mysterious. Agent-causation would simply be the agent manifesting her power to act. In this paper, we will not be concerned with defending the very idea of agent-causation, or of a metaphysics of powers. Rather, we will ask how we can conceive of the power to act in such a way as to make free action possible. As we will see, this will mean drawing a distinction between intrinsically different kinds of powers. We hope our account of the power to act will thus contribute to a more general understanding of mental powers. 8 Now according to some philosophers, the move towards a metaphysics of powers is all that is required in order to thread the needle between determination and mere randomness. We start by arguing that this is a mistake ( §2). Although the turn towards a powers-based ontology and account of causation is a necessary first step towards making sense of agent causation, we argue, pace Anjum 2013, 2015a), that the very ubiquity of powers, on such a view, undermines the ability of agent-causation to explain the idea that a free action must be up to the agent herself. Therefore, we argue, an account is needed of what distinguishes the power to act from ordinary powers. In section §3, we suggest that the relevant distinction must lie in the rational nature of the power to act. However, we argue, extant agent-causal accounts fail to account for this rationality in the right way. Another fairly recent proposal, that the power to act must be a two-way power is, we believe, on the right track. However, we argue that two-wayness by itself will not provide the understanding we seek of what makes the power to act special-unless we combine the idea that the power to act is two-way with the idea that it is rational. In the final section ( §4), we therefore sketch the outlines of an understanding of the power to act on which its twowayness can be seen to be a consequence of its intrinsically rational nature. 
On this picture, what distinguishes the power to act is its special generality-the power to act, unlike ordinary powers, does not come with any one typical manifestation. Rather, to what manifestation the power is directed is only determined in an exercise of the power itself. We argue that this special generality can be understood to be a feature of the capacity to reason or infer, as recent work in the philosophy of mind shows (e.g., Rödl 2007; Boyle 2011a). Hence on the resulting conception, the power to act will not be externally determined, nor random, but truly self-determined.
Getting Free Will from Powers?
In this paper we argue that agent-causalists must explain what is special about the nature of the power to act in order to make headway in the free will debate. Some philosophers, however, seem to think that the move towards a powers metaphysics by itself already furthers our understanding of free will and the dual demands it seems to make. The most prominent advocates of this idea are Stephen Mumford and Rani Anjum who, in a number of papers (Mumford and Anjum 2013, 2015a), outline how their understanding of powers can positively impact the free will debate. 9 In this section we will therefore consider Mumford and Anjum's account in some detail and argue that it is ultimately unsatisfactory. According to Mumford and Anjum the failure to understand free will comes from a tacit acceptance of what they call 'modal dualism': the idea that there are only two modal values-necessity on the one hand and possibility, or pure contingency, on the other. If everything is a matter of either necessity or contingency, they argue, no sense can be made of free will. For, as we have seen, an action cannot be free if it is random, nor if it is fully necessitated. Now powers, Mumford and Anjum believe, offer a way out of the dilemma between necessity and contingency, because they, on their view, display a third sort of modality. A power does not necessitate its manifestation, because there can be interferences that prevent the manifestation from coming about. A radiator might have the power to heat a room, but might not actually do so because of an open window that lets in a cool night breeze. If the radiator, however, does manage to heat the room, this is not a mere matter of contingency either: a power still has some modal strength to produce its manifestation. For this reason, according to Mumford and Anjum, there must be a sui generis modal value in between necessity and possibility: the dispositional modality.
5 According to Clarke, this is because agents, qua substances, are 'not the kind of thing that can itself be an effect' (Clarke 2003, p. 134). 6 The idea of agent-causation is certainly not new. It at least goes back to Reid (1999) and was subsequently defended by Chisholm (1966) and Taylor (1973). 7 Although some powers, e.g., a radium atom's power to decay, may be special in that they can manifest indeterministically. This means that they do not need a stimulus, or might not manifest even when they are in the right conditions. We will return to this when discussing two-way powers in §3. 8 Or at least, of the subset of mental powers that are rational powers.
Although one may of course criticize Mumford and Anjum's approach by questioning the cogency (indeed, the very logical availability) of such an in-between modal notion, 10 for our purposes it is more interesting to consider whether the dispositional modality (assuming that sense can be made of it) can help to strike the kind of balance between indeterminism and self-determination that philosophers of free will have been looking for. So although we do not ourselves believe that a positive accounts of powers, or an account of causation in terms of powers, requires what Mumford and Anjum call the dispositional modality, we will (in this section) assume their specific account of powers in order to scrutinize its potential for furthering an understanding of free will. Mumford and Anjum think that their account of powers is mainly beneficial to those who want to defend the incompatibilist perspective on free will, for it is supposed to consistently secure two concrete principles often defended (independently or jointly) by libertarian philosophers. 11 The first is the so-called principle of alternate possibilities (AP), which is the idea that an action cannot be free if the agent could not have acted differently. The second is the principle of ultimate authorship (UA): the idea that an agent must be ultimately causally responsible for her actions. 12 Now, according to Mumford and Anjum AP follows from their account rather simply: if all powers at most tend towards their manifestation, there always is the alternate possibility that the manifestation fails to come about. Obviously, if the exercise of any power at all entails an alternate possibility, then an agent's exercise of her power to act equally delivers them alternate possibilities. Indeed, they write: 'alternate possibilities become entirely ubiquitous, applying in any case of causation and not just those that are the exercise of an agent's powers' (Mumford and Anjum 2015a, p. 8). While Mumford and Anjum seem content with this way of securing alternate possibilities for action, we think that the very ubiquity of Mumford and Anjum's alternate possibilities shows that these are not actually the possibilities libertarians are looking for. The alternative possibilities that Mumford and Anjum have to offer are, it seems, the in principle possibility of an intervention on the manifestation of a power. Their account captures the conceptual truth that the notion of a power or disposition is not the notion of a property that makes the occurrence of the manifestation inevitable, for indeed, there may always be other objects that, by intervening, can prevent the manifestation from coming about. However, note that this conceptual truth will obtain even if in a concrete situation, there is no actual possibility of such an intervention. For instance, if there is in fact no other object close enough to steer a ball of course, the ball's momentum [it's 'disposition to movement' (Mumford and Anjum 2011, p. 6)] will result in its actually moving in a certain direction-even though momentum, as a power, is still the sort of thing on which an intervention is always possible. In short, it seems that Mumford and Anjum mistake the merely conceptual possibility of intervention for the kind of alternative possibilities that are at stake in the debate about free will. 
13 For it will be true of every exercise of a power that (if circumstances had been different) it was in principle possible to prevent that exercise-even if it was, in the actual case, fully determined that the manifestation would occur. 14 Thus the claim that the mere presence of a power makes room for the kind of absence of determination required for free will seems puzzling. And indeed, Mumford and Anjum seem to realize that the kind of 'alternative possibilities' secured by their account are too liberal to suffice for free will because they are ubiquitous: 'the very ubiquity of AP shows that it alone is not what free will consists in' (Mumford and Anjum 2015a, p. 9). That is why, they argue, the second requirement of ultimate authorship (UA) must also be fulfilled. Now Mumford and Anjum claim that their account is able to secure the relevant sense of authorship. For they understand an agent's action, e.g., her putting of a golf ball, as a tending (in the sense of an instance of the 'dispositional modality') towards an outcome of a certain sort: the ball dropping into the hole. If the agent succeeds in sinking the shot, then, despite the alternate possibility of failure, the success was still in virtue of her exercise of her agentive power. And therefore, Mumford and Anjum claim, she is the ultimate author of that act. However, this is puzzling. If it is correct to call an agent the author of her action simply because the action is a manifestation of her power, then would it not seem that all cases of power manifestation are cases of authorship? Indeed, if we think that 'the very ubiquity of AP shows that it alone is not what free will consists in', then how could we suppose that something that is equally ubiquitous (on a powers-based or dispositionalist understanding of causality)-namely, the exercise of a power-could secure free will? 15 This is an instance of a more general problem for agent-causal theories that rely on the ubiquity of powers or substance-causation. As we have argued elsewhere (van Miltenburg & Ometto 2016), the agent-causalist's adoption of contemporary powers metaphysics is a bit of a two-edged sword: on the one hand it enhances the acceptability of agent-causation by describing it as a species of the substance causation that occurs throughout nature, rather than as a uniquely human, unnatural, and mysterious phenomenon.
9 Ruth Groff (2016) is another philosopher who seems to believe that the move to a powers metaphysics directly dissolves some of the core problems surrounding free will. 10 In particular, one might worry that Mumford & Anjum run the risk of confusing causal and logical modalities, and in doing so, confuse what is often called logical with nomological determinism. We thank an anonymous reviewer for pointing this out. 11 See e.g., Kane (1996) for a joint defence. 12 These principles signify the concrete way in which the libertarian tries to make sense of the two seemingly opposite intuitions we have about free will. If I have multiple alternative possibilities my action is not determined, and if I am the ultimate source of my action, it is more than a random event. 13 The relevant alternative possibilities in the free will debate are often described as possibilities 'given the past and the laws of nature' (see e.g., Franklin 2011, p. 204 and Mele 2006, p. 9). These are alternative possibilities in a particular concrete situation, in which the prevention of a power may or may not be actually feasible.
But on the other hand, the very ubiquity of substance-causation destabilizes the appeal to agent-causation as the defining feature of human free will. Consider again Clarke's claim that the agent is 'in a strict and literal sense' the uncaused cause, or originator, of her decisions, because the agent as substance is not 'the kind of thing that itself can be an effect' (Clarke 2003, p. 134). Now the problem is that, if substance causation is ubiquitous, any substance manifesting its power would equally seem to become an uncaused cause. Contrast this with Aristotle, who is the inspiration for much of the contemporary metaphysics of powers: 16 The stick moves the stone and is moved by the hand, which again is moved by the man: in the man, however, we have reached a movent that is not so in virtue of being moved by something else. (Aristotle 1996, II.5, 256a6-8) In Aristotle's description of this causal chain, substances-the stick, the hand, the man-are doing the causal work at each step. However, only one of these substances-the man-is claimed not to be moved by something else: only the man is the uncaused cause. This fact thus does not seem to derive just from the fact that the man is a substance, but rather from the peculiar sort of power that it has, in virtue of being the special kind of substance it is. Analogously, we believe that the most important task for contemporary agent-causalists is to explain what exactly is special about the type or variety of substance-causation in which Aristotle's man is engaged. We turn to the question what might constitute this difference in the next section. 16 As can be seen from the fact that contemporary realism about powers is sometimes even referred to as 'the new Aristotelianism' (Groff and Greco 2013). Footnote 13 (continued): Mumford and Anjum sometimes seem to admit that their conceptual possibilities are compatible with determinism (2011, p.75). Other times, they deny this (Mumford and Anjum 2013), but only because they define determinism not as the absence of possibilities 'given the past and the laws' (as it is in the free will debate), but simply as 'causal necessitarianism': the claim that the concept of a cause is the concept of a necessitating condition. We agree that the latter claim is indeed refuted by powers-based or dispositional accounts of causality. Also compare (Anscombe 1971). 14 So, we should note, Mumford and Anjum's claim that alternative possibilities are ubiquitous should be distinguished from the claim that every power is an indeterministic power, like radium's power to decay. It seems that the latter notion can be understood independently from any particular commitments of Mumford and Anjum's account, such as the dispositional modality. We return to the idea of indeterministic powers in §3. 15 In a more recent paper Mumford and Anjum (2015b) agree that more needs to be said about authorship, albeit in response to a different (and more limited) problem, namely, that the exercise of an agent's own powers might be unfree due to external influences like subliminal advertising or hypnosis. In short, they suggest that agents can take authorship of their power to act, by means of the power to reflect on that power. Although arguing against this proposal is beyond the scope of this paper, it seems unsatisfactory: if one wants to understand how the agential power to act is somehow different from the powers that ubiquitously occur in nature, we do not see how it helps to simply afford the agent more powers of that same ubiquitous kind.
The Power to Act If powers are ubiquitous in nature and free will is not, then agent-causalists have to explain what the distinguishing feature of the power to act is. As a first step towards answering this question, we will discuss a common, and we believe fundamentally correct, suggestion: that the power to act is different from other powers because it is a rational power. 17 But what does it mean to say that a power is rational? A first suggestion is that we have to understand the rationality of a power in terms of the rationality of its manifestation. This is the direction in which Timothy O'Connor's account of agent-causation goes. He believes that in order to account for the rationality of the agent-causal power, we need to understand it not as a power to directly produce physical movement, but rather as the power to produce, what he calls, an 'action triggering intention'. 18 This intermediate step between the agent and her action indeed seems to provides O'Connor an easy way to account for rationality. For on his view, the action triggering intention is not just an intention 'to A', but its content is rather 'to A for reason R' (O'Connor 2002, p. 351). Hence the intention is itself intrinsically rational. But does the fact that the manifestation of the power to act is a mental state that mentions a reason explain why the power to act itself is rational? We believe that it does not: for it is quite easy to imagine that such mental states could be induced by a non-rational power (i.e the influx of electric current in the brain, or some such). In other words, that the agent-causal power results in a state that may go on to rationalize an action does not mean that the agentcausal power itself is rational. Hence, its manifestation alone cannot account for the rationality of the power to act. Let us therefore consider a second suggestion: a power might be rational when it is responsive to reasons or rational states. O'Connor, for example, stresses that the agent-causal power should not just produce rational states, but should also be responsive to such states: before an agent exercises her agent causal power, she typically deliberates and is aware of the same reason that is part of the resulting intention's content (which we discussed above). Which reasons can enter into the content of the intention thus depends on which reasons the agent was aware of beforehand. But how exactly do these reasons influence the agent's exercise of her power to act? The most obvious candidate for this relation is that the reasons constitute the stimulus conditions of the agentcausal power. On such a view, the power is fundamentally the power to do or intend that A when one considers reasons for A-in the same way that salt has the power to dissolve when placed in water. 19 But then it seems the agent's reasons would simply determine what she does: she would have no choice in the matter. Indeed, O'Connor is (like other agentcausalists) explicit that the agent's reasons cannot bear this kind of relation to the exercise of the power. Instead O'Connor suggests that agent causation is probabilistically structured by reasons: 'coming to recognize a reason to act induces or elevates an objective propensity for me to initiate the behavior' (O'Connor 2005, p. 353). However, this seems to leave unanswered the question of how the agent's consideration of her reasons relate to or impact on the her power to act. 
If consideration of the reasons is to 'elevate an objective propensity' for the agent to exercise her power in a certain way, then how do they elevate it-if not causally? Moreover, if the reasons would do nothing more than set certain probabilities, then it would seem that the further exercise of the power itself is not guided by reasons in any sense: it would seem to be just a matter of luck which of the probabilities materializes. As Pereboom (2014, p.61) notes, it seems an 'unexplained coincidence' that when an agent has more reason to act, she has a higher probability to, of her own accord, to exercise her agent-causal power. The fundamental problem here seems to be that, as long as we think of the agent's reasons as states existing prior to the manifestation of the power, they appear to be mere circumstances under which the power to act is manifested. These circumstances can then be causally connected to the exercise of the power (which leads to the problem of external determination), or not-in which case the power no longer seems responsive to the reasons at all. 20 If we indeed cannot secure the rationality of the power to act by reference to the supposed special characteristics of either that power's typical manifestations, or its triggers, then how can the agent-causalist account for the difference between the power to act and other powers that is required to overcome the problem of ubiquity ( §2)? We believe that perhaps this difference may be located not, as it were, 17 See, e.g., (Lowe 2013;O'Connor 2000). One might wonder whether this answer to the ubiquity problem excludes certain actions which do not seem rational in a stronge sense (e.g., idly tapping one's fingers) from being free. But it seems to us that if one endorses a sufficiently broad notion of rational action, one can understand such behavior in which one acts 'for no particular reason' as rational in a minimal sense: namely, as an intentional action. Cf. (Anscombe 1957, p. 25). 18 The power to produce such intentions has to be indeterministic, according to O'Connor (2000), but it is not fully clear to us whether O'Connor believes that the causal chain leading from the intention to movement also needs to be indeterministic. outside of the power (in its manifestations or the triggering conditions), but in the power itself. 21 As we will suggest below ( §4), the power to act must be an intrinsically rational power. Before we do so, however, it will first be instructive to consider a recent approach that seems to appreciate the point that the required distinguishing feature of an agent's power to act must be a feature of the kind of power at issue. This is the idea that the power to act must be a so-called two-way power (e.g., Steward 2012b; Lowe 2013; Alvarez 2013). Although we believe there is indeed a close connection between the power to act and the 'two-way' feature that these theorists point to, we argue that there is reason to doubt that this feature indeed ultimately accounts for the intrinsic difference which secures that the power to act is a truly selfdetermining power. Let us begin by considering Steward's version of the idea that agency is a two-way power: the agent is conceived of [...] as a possessor of what is sometimes called two-way power-the power to or not to . Exactly what will occur is not settled in advance by antecedent states and events...It is settled by the agent at the time of action by means of an exercise of a two-way power. (Steward 2012a, p. 
250) An action, according to Steward, is a settling of such an antecedently open possibility. Suppose it is open whether or not someone will (say, open the window) at t. Then at t the agent will manifest her two-way power by either performing , or not. Although we do not believe this is false, we doubt that Stewards notion of a two-way power is robust enough: for Steward's notion of a two-way power seems to be too close to the general idea of an indeterministic power. 22 Why can we not say that, e.g., a radium atom also possesses a two-way power-the power to decay or not to decay? At any point before t, it will not be settled yet whether the atom will decay at t or not. When the time comes, this is settled by the radium atom: one way by its decaying, or the other way by its not decaying. The bare idea of settling a hitherto open possibility does not allow us to distinguish the power to act from the indeterministic powers of inanimate objects. 23 Now, there is another way of understanding the suggestion that agency is a two-way power. We should not view the power as one to either perform or not to perform , but rather as a power to perform (or decide) to or perform (or decide) to not-(such that, e.g., 'not opening the window' is the description of one's intentional action, or the content of one's decision). 24 As Lowe remarks, that would provide for the relevant contrast between the power to act 25 and other indeterministic powers: 'a radium atom cannot in any coherent sense refrain from decaying on any given occasion: at most it can simply fail to decay, because it happens not to manifest its power to decay on this particular occasion.' (Lowe 2013, p. 177) We do not wish to deny that it is the special prerogative of rational agents to sometimes refrain from a certain course of action, in a sense in which a radium atom cannot. But note, first, that it does not in general seem correct to say that whenever an agent does not , where is in her power, it follows that she engages in action (i.e., manifests the power) of not-'ing. When the thought of calling one's friend to congratulate her on her birthday crosses one's mind, but one then decides to water the plants, it does not have to be the case that one has decided not to call one's friend, or that one has decided to, at that time, water the plants rather than calling one's friend. One can simply decide to water the plans-thereby in fact refraining from calling one's friend. But in that case, one's refraining will not constitute the manifestation of a power: it will not consist in anything an agent does do. 26 However that may be, we have a more fundamental worry concerning Lowe's proposal. If we consider and not-as two distinct, mutually exclusive prospective actions, can we still make sense of the idea that both are possible manifestations of a single power? After all, powers are directed at their manifestations. But how can one and the same power be directed, at the same time, at contrary effects? 27 21 In footnote 15 we already mentioned another proposal, by Mumford and Anjum, that seemed unsatisfactory because it attempted to locate the distinguishing feature of the power to act outside of the power itself. 22 See fn. 7. 23 The objection that the notion of settling does not seem powerful enough to deliver free will has also been raised by (Broadie 2013). Steward herself is sometimes aware of this problem, claiming that for an agent's settling to be understood as a free act, it must be a case of so-called top-down causation. 
However, it is not clear that Steward's notion of top-down causation allows her to evade the problem as topdown causation itself is apparently also instantiated by non-free, nonagent substances. Compare our van Miltenburg & Ometto (2016). 24 It may appear that this is the notion that Steward, too, wants to adopt, as she sometimes describes a two-way power as a power 'of refrainment' (Steward 2012b, p. 156). However, she later makes it clear that she intends the weaker notion: '...the relevant possibility is merely that [the agent] should not have made the decision [...] that he in fact made at t. And this is an omission, not an act' (Steward 2012b, p. 170). 25 Lowe speaks rather of 'the will', a power whose characteristic manifestations are volitions ('the most primitive or basic kind of action that any agent can perform' (Lowe 2013, p. 178)), which may then go on to cause the willed action. 26 Compare Steward's (2012b, pp. 170-173) account of refraining. 27 Compare Aristotle's problem about how certain skills, which he famously conceived of as two-way powers, can be aimed at contrary effects in Metaphysics IX.2, 1046b47. His solution is that the faculty of choice provides the two-way power with a direction-but if that is supposed to be a solution, then of course the power of choice itself cannot be two-way in the same way (although as we suggest in §4, it may still be two-way in Steward's sense). Lowe's idea seems to be that this is possible because the agent herself picks out one of the contrary effects-a decision 28 to either perform or refrain from the action-by exercising the power. However, at the same time, he claims that decisions are 'mental occurrences or events' (Lowe 2013, p.172) which, it seems, emerge only as the result of the power's exercise-they are the result, or upshot, of an agent's exercising the power to will to do something. But if decisions indeed are the results of the power to decide, then how can these decisions already play a role in determining which of the contrary effects at which the power aims (i.e., the decision to or the decision to not-) ensues? Must there then be another secondary power to decide the direction in which one's primary power to decide is going to manifest-and so on? The problem with Lowe's proposal thus seems to be that decisions come to the table at too late a stage. 29 The problem is related to that concerning the rationality of the power to act (or to decide) that we discussed above in connection to O'Connor's view. For although Lowe agrees that the power in question must be a specifically rational power, it seems that his account offers no way to understand the relation between an agent's consideration of her reasons for action, and her exercise of the two-way power. Lowe claims that the power is exercised 'in the light of reasons': when deliberating about how to act, an agent reflects on such reasons and then exercises his or her will in a manner that, typically, corresponds to his or her judgement as to where the weight of reasons for or against any particular course of action falls. (Lowe 2013, p. 177) But what explains that the exercise of the power 'typically corresponds' to the preceding reflection on reasons? 30 Moreover, it remains unexplained why on Lowe's view there should (ever, or typically) be any prior consideration of reasons before the two-way power is exercised. The power's being two-way seems compatible with its exercise occuring completely 'in the dark', as we might say. 
As long as that is the case, it seems, we fail to understand how such a power can be an intrinsically rational power. Self-Determining Power In the previous section we have discussed two suggestions concerning the difference between the power to act and other powers. The power to act is (1) a rational power, and (2) it is a two-way power. However, it turned out to be quite difficult to explain wherein this rationality consists, and how it can help in distinguishing the power to act from other two-way powers (such as radium's power to decay). In this section, we want to sketch the outlines of an approach that promises to vindicate both points. This proposal explores a way to think about the power to act as genuinely free. For if it is a two-way power, then this guarantees that actions are not pre-determined. And if its directedness is rationally controlled, this guarantees that it's exercises are not merely random. In other words, what we hope to offer is a preliminary understanding of how the power to act can be a truly self-determining power. In brief, our suggestion is that we can understand the specifically rational nature of the power to act, as well as the fact of its being a two-way power, by attending to that power's characteristic generality. This generality can be brought into focus by examining an objection to agent-causation that has hitherto received little attention. The objection was originally raised by Clarke (Clarke 2003, pp. 192-193), who worried that, even if we accept a powers-based or substancecausalist conception of causation, an agent's power to act would still appear to have to be a causal power of a different sort-thus undermining the attempt to demystify agentcausation. O'Connor (dubbing it the 'uniformity objection') formulates the relevant disanalogy as follows. Ordinary powers, whether they be deterministic like water's power to dissolve salt or 'two-way' like radium's power to decay, 'are tendencies towards effects, i.e., the powers themselves are disposed to produce effects' (O'Connor 2009, p. 238). In our own parlance: such powers are directed at a certain specific manifestation. Now, what manifestation is the power to act directed at? The answer appears to be: none. For action or acting does not name a specific event. 31 But aren't all powers general in this way? The solubility of salt, for instance, is not 30 As Pereboom (2014, p.61) objected to O'Connor's view, this seems an 'unexplained coincidence'. 31 It might perhaps be thought that agents do not have one general power to act, but rather have many separate powers, to say, play the piano, bake a cake, or butter some toast. But such a view would still have to explain what common feature these separate powers have that makes all of them powers for rational (and arguably free) action. As we have pointed out above, their rationality cannot reside in these powers being reasons responsive, or productive of rational states. Nor is it sufficient to insist that all of these powers are two-way powers. It is therefore that we suggest precisely that generality, and the possibility of self-determination that comes with it, is the defining feature of rationality. But obviously, such a specific power to, e.g., bake a cake is precisely lacking in this kind of generality. We thank an anonymous referee for raising this issue. 28 Or in Lowe's preferred terminology: a volition 29 Lowe sometimes seems sensitive to this worry. 
He claims that the power to act-or the will, as he calls it-is a 'non-causal power' (Lowe 2013, pp. 174-175). An agent, Lowe submits, does not cause herself to have a certain volition: she just has it, and that is her willing it, i.e, it is her directing the power in a certain direction. However, it is not clear how this is supposed to solve the problem. For isn't the volition still an event that is the result or upshot of the power's exercise, even if we cannot properly say that the agent causes herself to have it? aimed at a specific dissolving either. It does not, by itself, determine, say, how quick the salt dissolves, for that also depends on external factors like temperature of the solvent. Nevertheless, it is important to note that the solubility does determine the types of event that can be its manifestation, be they quick or slow. If the salt for instance melts, it is clear that this was no exercise of its solubility, but rather of its meltability. Now the power to act is more general because it does not even delineate particular event-types, such as raising one's arm, or baking a cake. The concept of action, we might say, is more general than such event-or action-kinds. If agents possess a power to act, it must thus be a power that can in principle be manifested in the performance of a seemingly infinite array of specific action-types. The power itself, it seems, does not favor the occurrence of any of these manifestations: just by possessing the power to act, an agent does not exhibit a tendency towards any of the specific actions that she could in principle perform. So it seems that, rather than saying that the power to act is directed at all of these action-types-a proposal that would be structurally similar to Lowe's construal of a twoway power as a power that is directed at multiple outcomes at the same time-the right thing to say is that the power to act is, by itself, not directed at any specific outcome. 32 When one describes the power to act as 'the power to ', then, there hence is something curious about this: we do not mean that it is a power to perform a specific substitution of the variable , e.g., raise one's arm. Rather, we want to suggest, the power to act is a general power in precisely this sense: it is a power to perform any possible substitution of . 33 Thus the does not stand in for particular action types, but rather signifies that all of its instantiation bear a certain form: the form that we call 'intentional action'. The power to act, if it is directed at anything, is directed at producing events of this form. 34 We thus suggest that the very disanalogy between ordinary powers and the power to act that Clarke and O'Connor notice, and conceive to be a potential problem, in fact constitutes the intrinsic difference between the power to act and other (inanimate) powers. One might wonder, however, whether the power to act still merits the name 'power' if it is so fundamentally different from other powers. 35 How could this so-called power to act, for instance, become manifest in the first place, if it is not aimed at a specific action-type? It seems that for a specific action to count as the manifestation of that power, it would have to be the case that the general power somehow receives a particular specification so that it becomes aimed at a specific type of action. But what could that mean? Not, we submit, that circumstances external to the power and its exercise direct it in one way or another. 
For it would then simply be a power to do different things, depending on different circumstances, just as the solubility of salt may result is a quicker or slower dissolving process dependent on the external circumstances such as the solvent's temperature. Hence to say that the power to act needs an external determination would simply be to deny that the power to act is general in the sense we are exploring in this section. 36 Rather, we want to suggest, the power to act gives itself a direction in being exercised (or: that the agent gives it that direction by exercising it). As we said, we want to understand the power to act as self-determining. But how does this work? Remember that the power to act is a general power that is therefore not aimed at any particular type of manifestation. It follows that the agent, just in virtue of possessing the power to act, does not exhibit a tendency towards any particular (type of) outcome. In order for such a tendency to come about the agent first has to make up her mind about what to do. Now our suggestion is that it is in exercising her power to act (i.e., as we will see, in her making up her mind) that an agent determines it to now be a power to, say, raise her arm-and thereby acquires the 'tendency' to raise her arm. 37 Hence, because the power to act is not directed at any particular manifestation-since it is a general power-it cannot, so to speak, lie in waiting until its stimulus condition comes about, and then start to manifest. Rather, it seems we 32 In the previous section (3) we argued that it is untenable to suppose that a power could be directed simultaneously at doing and not-. But now we can see that even if Lowe's proposal is misconceived, he was right that there is nevertheless something special about the very directedness of the power to act, which distinguishes it from merely indeterministic powers. 33 We do not mean, of course, that any agent is, just in virtue of being an agent, capable of any action at all: obviously, being an agent does not suffice for being able to swim. The point, rather, is that of all the actions that an agent is able to perform at some time and place, the power to act is not directed at any of them specifically. Moreover, the power to act will of course be a prerequisite of acquiring, say, the skill of swimming (as Aristotle says: 'as regards those things we must learn how to do, we learn by doing them' (Nichomachean Ethics 1103a31).). 34 Although we cannot go into this here, it seems that what it means to be an intentional action-what it is for an event to bear the form which makes it an instance of our variable -is at least in part for it to be a manifestation of the power to act. Something similar is argued, e.g., by Rödl (2007, chapter 2). cannot properly distinguish between activation or triggering of the power, and the power's manifestation-as we do for, e.g., the powers of inanimate objects. 38 To get clearer on what we are exactly recommending let us consider the difference between our proposal and the existing forms of agent-causalism that we have discussed in this paper. From the perspective we are exploring, it seems that accounts such as, e.g., O'Connor's and Lowe's, are attempts to explain what we are taking to be the intrinsic general nature of the kind of power at issue in terms of certain features of the typical manifestations of the agent-causal power. 
After all, they claim that the power results in contentful states (or for Lowe, volitions), and this content can be of any action whatsoever. But once the power to act has produced such an intention or volition, the latter is, as it were, all on its own: there problematically is no intrinsic connection between what it is to be a state with a certain content, and being an exercise of the power to act. By contrast, on our proposal, it is the power itself that, in being exercised, acquires a specific direction. And so, as we will shortly explain, we can understand this self-determining character of the power to act as a feature of its intrinsically rational nature. Yet before we do so, it seems that we can already say that, if the power to act is self-determining in the sense we are suggesting, it will be a power that is 'two-way', in Steward's sense (see §3). For before its exercise, that power has no specific direction, and so arguably, nothing external to the power could trigger its manifestation. And therefore the power can also fail to manifest in any given circumstance. Thus our proposal gives content to the idea that free actions cannot be determined by prior events. Moreover, we suggest, the power to act is two-way in a sense that distinguishes from merely inanimate two-way powers, such as radium's power to decay. To this end, we would like to explain how the characteristic openness of the power to act, on our view, is bound up with rationality. To do so, it will be helpful to consider recent developments in the philosophy of mind on the capacity for inference. Matthew Boyle (2011a, b), for instance, argues that the capacity for inference (roughly, the power to arrive at beliefs by considering other beliefs) is a capacity that displays precisely the feature we have identified above: when one infers p from q, then the result (or manifestation) is not independent from the activation or triggering of the capacity. We will take a brief look at his argument in order to get in view the analogy between the capacity to infer, or reason, and the power to act that we have in mind. Boyle begins by considering that, at least in the normal case, someone who infers p from q knows that she believes that p because she believes that q. And this knowledge of why one believes p, it seems, is not just an accidental byproduct of the inference: a belief, once formed, doesn't just sit there like a stone. 39 What I believe is what I hold true, and to hold something true is to be in a sustained condition of finding persuasive a certain view about what is the case. [...] inference is not a mere transition from a stimulus to a response; it is a transition of whose terms I am cognizant, and whose occurrence depends on [...] taking there to be an intelligible relation between these terms. (Boyle 2011b, p. 231) That is to say, one's knowledge that one believes that p because one believes that q, and one's actually subscribing to the inference-one's believing that p on the basis of q-are not distinct. Indeed, it seems impossible to believe that p follows from q, and believe that q, without thereby coming to believe that p. 40 Thus, if we think of inference as a power or capacity, we cannot think of one's reasons for believing something as external to the exercise of the capacity. Recognizing q as a reason for believing p (i.e., recognizing that p follows from q) already is to come to believe that p because of q, and so it is to exercise one's capacity to make up one's mind. 
The explanation for why one comes to believe that p, on the one hand, and the manifestation of the capacity, on the other, do not come apart. 41 That is why, according to Boyle, a subject who makes an inference can normally explain why she possesses the resulting belief. If this brief sketch of Boyle's account of the power of inference is along the right lines, then it seems that inference-the capacity to make up one's mind about what to believe-is self-determining in a sense similar to that in which we have suggested the power to act must be. For the power of inference, too, will be directionless until the moment that the subject recognizes that her hitherto held belief that q is a reason for believing that p-and that is, until the moment that the subject makes up her mind that p. Until such time as she actually makes the inference, the subject will not have a 'tendency' to believe either p, or any other belief that may (in fact) follow from q. It is only in making the inference that the power receives its direction. Importantly, this parallel between the structural features of the power of inference and the power to act need not be surprising. For there is a philosophical tradition according to which the power to act, or the will, 'is nothing but practical reason' (Kant 2002, GMS 412). And practical reason, or practical inference, is an agent's capacity to derive or infer an action from her ends: it is her capacity to make up her mind about what to do. So we should expect Boyle's considerations, in so far as they apply to theoretical reasoning, to apply equally to the case of practical reasoning. And indeed, to mention one parallel: just as a believer is, in believing p, aware of the reasons for that belief, an agent who is intentionally φ-ing is therein aware of the reason why she is φ-ing. 42 Our suggestion is thus that the peculiar self-determining character of the power to act is a consequence of its intrinsically rational nature.
38 Interestingly, Alvarez (2017) has recently argued that something similar is true of the powers or dispositions that are our character traits. Since for having a character trait (courage, say) it is, as Alvarez argues, necessary that one actually displays it, she suggests that the structure of such a power cannot conform to the simple model on which the stimulus conditions, the power itself, and its manifestations are distinct. Although we cannot argue the point here, it may be that this is because such character traits are instances of rational powers, as is the power to act on our view. 39 As we remarked above, it seems that on e.g., O'Connor's account, an intention, once agent-caused, 'sits there like a stone' in precisely Boyle's sense here: it is no longer sustained by the agent's power to act, and hence is not intrinsically active. 40 Of course, there can be odd cases, for instance, when someone fails to realize that her belief that q refers to the same proposition as the antecedent in her belief that p follows from q. But at least in the normal case, it appears that the connection holds. And as Boyle argues, this seems to be essential to what it means to be a rational subject. 41 For a similar account of inference see Rödl (2007, chapter 1), and chapter 2 of that book for an application of this idea to practical reasoning.
More precisely, it is a consequence of the fact that making up one's mind about what to believe or do is a self-conscious activity: an exercise of a power for inference, as we have seen above, is not independent from the agent's knowledge that she exercises it. 43 If this is right, then the power to act is not just a power that is exercised 'in the light of reasons', as on Lowe and O'Connor's accountsthe agent's reasons are not just circumstances in which the power is exercised. Instead, an agent's making up her mind in practical reasoning just is her exercising her power to act. In this way we avoid the problem which plagues other agentcausal accounts, namely, that it can seem to be nothing more than an 'unexplained coincidence' (Pereboom 2014, p. 61) that an agent exercises her power in a way correlating with her previous consideration of her reasons. Thus our proposal gives content to the idea that free actions are not random but indeed (self-)determined by reason. This completes our sketch of the power to act. Although it is obvious that much more work is needed in order to give a full fledged account of this power, we now have to respond to the one remaining worry that was behind the socalled 'uniformity objection': If the power to act is indeed general and self-determining as we say, do we not lose the advantage of the substance-causal view-that the ubiquity of (uniform) substance causation makes agent-causation unmysterious? We believe that this is not the case, for the general power to act is still a variety of power, and powers are ubiquitous. Hence, we're still better off in the sense that agents are not the only 'substance causes' in a world that is otherwise filled with 'event causes'. And if our argument is right, then nothing is gained by insisting on pure uniformity. If the metaphysics of powers is to be helpful in understanding a wide range of phenomena, including those that belong to the philosophy of mind, we submit that it would do well to investigate the idea that there is a special variety of rational, self-conscious, and thus self-determining power. Perhaps it can even be hypothesized that the generality of the power to act is something that is not unique to it, but rather a mark of all mental powers. For it seems that, e.g., the power to imagine, is not a power to imagine something particular, and neither is the power to judge a power to make any particular judgement. 44 Moreover, if we are correct in thinking that the power to act is a power of inference, then the apparent mystery one might think surrounds this power will further subside once we attain a better understanding of such inference. Indeed, the idea that we possess a power to act that is undetermined and rational in the sense we have explored in this section can thus be investigated further by inquiring into what practical reasoning is. A number of philosophers have already begun this enquiry, arguing that practical reasoning-for reasons similar to Boyle's argument concerning theoretical inference-an intrinsically rational kind of cause of action (e.g., Rödl 2007;Marcus 2012). Investigations in the philosophy of mind and action, and in the metaphysics of power, thus seem to have much to learn from each other. Research Involving Human Participants or Animals This article does not contain any studies with human participants or animals performed by any of the authors. Informed Consent Informed consent was obtained from all individual participants included in the study. 
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
13,305.6
2018-11-30T00:00:00.000
[ "Philosophy" ]
Ultraholomorphic extension theorems in the mixed setting The aim of this work is to generalize the ultraholomorphic extension theorems of V. Thilliez in the weight sequence setting, and of the authors in the weight function setting (of Roumieu type), to a mixed framework. Such mixed results are already known for ultradifferentiable classes, and it seems natural that they should have ultraholomorphic counterparts. In order to control the opening of the sectors in the Riemann surface of the logarithm for which the extension theorems are valid, we introduce new mixed growth indices which generalize the known ones for weight sequences and functions. As it turns out, for the validity of mixed extension results the so-called order of quasianalyticity (introduced by the second author for weight sequences) becomes important.
Introduction
In the authors' recent works [14] and [11] we have shown extension theorems in the ultraholomorphic weight function framework, in the first article for spaces of Roumieu type and in the second one also for Beurling type classes. Such results were already known for the weight sequence approach, see [32]. In [11] we transferred Thilliez's ideas to the weight function situation (by using ultradifferentiable Whitney extension results), and in [14] we used complex methods treated by A. Lastra, S. Malek and the second author [18,19] in the single weight sequence approach. In the ultradifferentiable setting, Whitney extension results involving two weight sequences M and N and weight functions σ and ω are also known in the literature. In the weight sequence case we refer to [7] for the Whitney jet mapping and to [31] for the Borel mapping; in the weight function case see [5] for the Borel mapping and [17], [25] and [22] for the general Whitney jet mapping. In our recent paper [12], which has served as motivation for this article, by involving a ramification parameter r ∈ N >0 we generalized the mixed setting results from [31] to the r-ramification classes introduced in [30]. We also generalized the Whitney extension results from [7] by using a parameter r > 0 (see [12, Theorem 5.10]). The possibility of an extension in these mixed settings has been characterized in terms of growth properties of weight sequences and functions. We refer also to Remarks 3.3 and 3.4, where more (historical) explanations will be given. From this theoretical point of view it seems natural to ask whether extension results in the mixed settings can also be proved in the ultraholomorphic framework, and this question is treated in the present work. We consider Roumieu type classes in both the weight sequence and the weight function setting. By inspecting the proofs of the main results in [14], [11] and [32], it has turned out that, as far as we can tell, only the complex methods from [14] allow the results to be generalized to a mixed situation; see Remark 5.8 below for further details. The existence of ultraholomorphic extension results is tightly connected to the opening of the sectors on which the functions are defined. In the previous results [32], [13], [14] and [11], growth indices γ(M ) and γ(ω) have been introduced to measure the maximal size of these sectors; for a detailed study and comparison of these values we refer to [10]. A similar notion is therefore required in the mixed setting to obtain satisfactory theorems.
Therefore, motivated by the occurring mixed ramified conditions between M and N and their associated weight functions ω M and ω N appearing in [12] the definition of the mixed growth index for sequences γ(M, N ) and for weight functions γ(σ, ω) has been given, see Section 3.1. Under restrictions of the opening of the sectors in terms of these indices, we have stated the main extension result, Theorem 5.7, for a pair of two given weight functions, using the weight matrix tool described and used in [28] and [23]. Then the results are transferred to the weight sequence case thanks to the associated weight functions. Compared with the previous known extension results for weight functions (in the ultraholomorphic setting) we will also treat "exotic" cases here, more precisely: The growth property ω(2t) = O(ω(t)) as t → +∞, denoted by (ω 1 ) in this article, will not be needed in general anymore in the mixed situation. This property is usually a very basic assumption when working with (Braun-Meise-Taylor) weight functions ω and it is equivalent to having γ(ω) > 0 as shown by the authors in [14]. Moreover (ω 1 ) has also been used to have that the class defined by ω admits a representation by using the so-called associated weight matrix Ω, see Section 2.4 for a summary. Our main extension result Theorem 5.7 is formulated between ultraholomorphic classes defined by weight matrices and we are able to treat such a general situation since in [14] we have worked with weight functions and their associated weight matrices also in a "nonstandard" setting, i.e. not assuming (ω 1 ) necessarily. More detailed explanations will be given in Remark 5.1 below. In Appendix A such nonstandard examples will be constructed explicitly and underlining the different situation in our work here. In the preceding extension results for one sequence, the opening of the sector where the functions are defined is at most πγ(M ). As it will be seen in Section 3, for any sequences M and N satisfying standard assumptions the mixed index γ(M, N ) is always belonging to the interval [γ(N ), µ(N )], where µ(N ) is denoting the so-called order of quasianalyticity introduced by the second author, see [26] and [13]. We know that even for strongly regular sequences N one can have γ(N ) < µ(N ) and the gap can become as large as desired, see Remark 3.12. In these situations, we can provide an extension map for any opening πγ with γ(N ) ≤ γ < µ(N ) by limiting the size of the derivatives at the origin in terms of a smaller sequence M . Furthermore, this sequence M can be chosen optimal in some sense, thanks to a modified version of the technical construction in [24,Section 4.1]. Hence we can show that the Borel map will be not surjective necessarily anymore but admitting a controlled loss of regularity, so that µ(N ), usually related to the injectivity of the Borel mapping, does have also a meaning associated with the surjectivity. For weight functions the situation is analogous by introducing the order µ(ω) in Section 3.10. The paper is organized as follows: First, in Section 2 all necessary notation and conditions on weight sequences and functions used in this article will be introduced. In Section 3 we will define and study the new mixed growth indices γ(M, N ) and γ(σ, ω) and investigate also the connection of these values to the orders µ(N ) and µ(ω). 
In Sections 4 and 5 we will transfer the results from [14] to the mixed settings and providing only the necessary changes in the proofs, the main results will be Theorem 5.7 for the general mixed weight function case, Corollary 5.10 for mixed Braun-Meise-Taylor weight functions having (ω 1 ) and Theorem 5.12 for the mixed weight sequence case. In Section 6 we will prove mixed extension results fixing only the weight that defines the function space for any sector with opening smaller than πγ(·), see Theorems 6.2 and 6.4. Finally, in the Appendix A, we are providing some (counter-)examples showing γ(M, N ), γ(σ, ω) > 0, but such that all nonmixed indices γ(·) are vanishing, see Theorem A.3. Ultradifferentiable classes defined by weight sequences and functions Similarly we will use this notation for sequences N, S, L as well. M is called normalized if 1 = M 0 ≤ M 1 holds true and which can always be assumed without loss of generality. For any given weight sequence M and r > 0 we will write M 1/r := ( If M is log-convex and normalized, then M and the mapping j → (M j ) 1/j are nondecreasing, e.g. see [27,Lemma 2.0.4]. In this case we get M k ≥ 1 for all k ≥ 0 and We can replace in this condition M by m and by M 1/r (r > 0 arbitrary) by changing the constant C. More generally, for arbitrary r > 0 we call M to be r-nonquasianalytic, denoted by (nq r ), if and so M has (nq r ) if and only if M 1/r has (nq). Due to technical reasons it is often convenient to assume several properties for M at the same time and hence we define the class M ∈ SR, if M is normalized and has (slc), (mg) and (γ 1 ). Using this notation we see that M ∈ SR if and only if m is a strongly regular sequence in the sense of [32, 1.1] (and this terminology has also been used by several authors so far, e.g. see [26], [19]). At this point we want to make the reader aware that here we are using the same notation as it has already been used by the authors in [14] and [11], whereas in [32] and also in [10] the sequence M is precisely m in the notation in this work. (5) For two weight sequences M = (M p ) p and N = (N p ) p we write M ≤ N if and only if M p ≤ N p ⇔ m p ≤ n p holds for all p ∈ N (and similarly for the sequence of quotients µ and ν) and write M N if In the relations above one can replace M and N simultaneously by m and n because M N ⇔ m n. Some properties for weight sequences are very basic and so we introduce for convenience the following set: It is well-known (e.g. see [24,Lemma 2.2]) that for any M ∈ LC condition (mg) is equivalent to sup p∈N µ2p µp < ∞ and to sup p∈N>0 µp+1 (Mp) 1/p < ∞. A prominent example are the Gevrey sequences G r := (p! r ) p∈N , r > 0, which belong to the class SR for any r > 1. Moreover we consider the following conditions, this list of properties has already been used in [28]. An interesting example is σ s (t) := max{0, log(t) s }, s > 1, which satisfies all listed properties except (ω 6 ). It is well-known that the ultradifferentiable class defined by using the weight t → t 1/s coincides with the ultradifferentiable class given by the weight sequence G s = (p! s ) p∈N of index s > 1. Let σ, τ be weight functions, we write σ τ if τ (t) = O(σ(t)) as t → +∞ and call them equivalent, denoted by σ ∼ τ , if σ τ and τ σ. Motivated by the notion of a strong weight function given in [3] ω will be called a strong weight, if ω ∈ W 0 and in addition (ω snq ) is satisfied. 
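Before moving on, a worked example may help: the Gevrey sequences already mentioned make all of the above conditions explicit. The formulations used in the following LaTeX lines are the common ones from the weight-sequence literature and may differ in normalization from the ones intended above, so they are meant as a hedged illustration.

% Worked example: the Gevrey sequence G_r = (p!^r)_{p \in \mathbb{N}}, r > 0.
% Its quotient sequence is
\[
  \mu_p \;=\; \frac{(p!)^r}{((p-1)!)^r} \;=\; p^r ,
\]
% which is nondecreasing, so G_r is normalized and log-convex, i.e. G_r \in LC.
% Moderate growth (mg), in the usual form M_{p+q} \le C^{p+q} M_p M_q, holds with C = 2^r:
\[
  \bigl((p+q)!\bigr)^r \;\le\; \bigl(2^{\,p+q}\, p!\, q!\bigr)^r \;=\; (2^r)^{\,p+q}\,(p!)^r\,(q!)^r .
\]
% Non-quasianalyticity, in the quotient form \sum_p 1/\mu_p < \infty, and strong
% non-quasianalyticity (γ_1), in the form \sum_{j \ge p} 1/\mu_j \le A\, p/\mu_p, reduce to
\[
  \sum_{p\ge 1} p^{-r} < \infty
  \qquad\text{and}\qquad
  \sum_{j\ge p} j^{-r} \;\le\; \frac{C}{r-1}\; p^{\,1-r} \;=\; \frac{C}{r-1}\,\frac{p}{p^{\,r}} ,
\]
% both of which hold exactly when r > 1, consistent with G_r \in SR for r > 1.
% Finally, (G_r)^{1/s} = G_{r/s}, so the ramified condition (nq_s) for G_r
% amounts to r/s > 1, i.e. s < r.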
Concerning condition (ω nq ) we point out that hence it makes sense to consider the following generalization (ω nq r ) (analogously to (nq r )): Then ω r has (ω nq ) if and only if ω has (ω nq r ). 2.3. Weight matrices. For the following definitions see also [23,Section 4]. Let I = R >0 denote the index set (equipped with the natural order), a weight matrix M associated with I is a (one parameter) family of weight sequences M : For convenience we will write (M) for this basic assumption on M. We call a weight matrix M standard log-convex, denoted by (M sc ), if M has (M) and Moreover, we put m x p := 2.4. Weight matrices obtained by weight functions. We summarize some facts which are shown in [23, Section 5] and will be needed in this work. All properties listed below will be valid for ω ∈ W 0 , except (2.3) for which (ω 1 ) is necessary. (i) The idea was that to each ω ∈ W 0 we can associate a (M sc ) weight matrix Ω := {W l = (W l j ) j∈N : l > 0} by W l j := exp 1 l ϕ * ω (lj) . In general it is not clear that W x is strongly log-convex, i.e. w x is log-convex, too. (ii) Ω satisfies In case ω has moreover (ω 1 ), Ω has also Consequently (ω 6 ) is characterizing the situation when Ω is constant. For an abstract introduction of the associated function we refer to [20, Chapitre I], see also [15, Definition 3.1]. If lim inf p→∞ (M p ) 1/p > 0, then ω M (t) = 0 for sufficiently small t, since log t p Mp < 0 ⇔ t < (M p ) 1/p holds for all p ∈ N >0 . Moreover under this assumption t → ω M (t) is a continuous increasing function, which is convex in the variable log(t) and tends faster to infinity than any log(t p ), p ≥ 1, as t → +∞. lim p→∞ (M p ) 1/p = +∞ implies that ω M (t) < +∞ for each finite t and which shall be considered as a basic assumption for defining ω M . For all t, r > 0 we get The functions h M and ω M are related by g. see also [7, p. 11]). If M ∈ LC, then M has (mg) if and only if Lemma 2.8. Let ω ∈ W 0 be given and Ω = {W l : l > 0} the matrix associated with ω. Then we have 2.9. Classes of ultraholomorphic functions. We introduce now the classes under consideration in this paper, see also [14,Section 2.5] and [11,Section 2.5]. For the following definitions, notation and more details we refer to [26,Section 2]. Let R be the Riemann surface of the logarithm. We wish to work in general unbounded sectors in R with vertex at 0, but all our results will be unchanged under rotation, so we will only consider sectors bisected by direction 0: For γ > 0 we set i.e. the unbounded sector of opening γπ, bisected by direction 0. Let M be a weight sequence, S ⊆ R an (unbounded) sector and h > 0. We define Similarly as for the ultradifferentiable case, we now define ultraholomorphic classes associated with a normalized weight function ω satisfying (ω 3 ). Given an unbounded sector S, and for every l > 0, we first define < +∞}. (A ω,l (S), · ω,l ) is a Banach space and we put A ω,l (S). In any of the considered ultraholomorphic classes, an element f is said to be flat if f (p) (0) = 0 for every p ∈ N, that is, B(f ) is the null sequence. Mixed growth indices for extension results 3.1. The indices γ(M, N ) and γ(σ, ω). First, for r > 0 we introduce the following condition which will be denoted by (γ r ), see [30] for r ∈ N >0 and [32, Lemma 2.2.1] for r > 0: It is immediate that M has (γ r ) if and only if M 1/r has (γ 1 ). 
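Since the arguments in this and the following sections constantly pass between a sequence M and its associated functions ω_M and h_M (see Lemma 3.7 below), it may help to recall their standard definitions, in the normalization of [20] and [15]; the conventions of this paper may differ slightly, so the next lines are meant as a reminder rather than as exact reproductions.

\[
  \omega_M(t) \;:=\; \sup_{p\in\mathbb{N}}\,\log\frac{t^{\,p}}{M_p}\quad (t>0),
  \qquad
  h_M(t) \;:=\; \inf_{p\in\mathbb{N}}\, M_p\, t^{\,p}\quad (t>0),\ \ h_M(0):=0 ,
\]
% so that the two functions determine each other via
\[
  h_M(t) \;=\; \exp\bigl(-\omega_M(1/t)\bigr), \qquad t>0 .
\]
% For the Gevrey sequence G_s one gets \omega_{G_s}(t) \asymp t^{1/s} for large t,
% matching the statement above that the weight t \mapsto t^{1/s} and the sequence
% G_s = (p!^s) define the same ultradifferentiable class.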
In [32, Definition 1.3.5] the growth index γ(M ) has been introduced (for strongly regular sequences and using a definition which is not based on property (γ r ) directly). In , and µ p ≤ Cν p ≤ Cν k for all 1 ≤ p ≤ k. (ii) Moreover, in (M, N ) γr we can equivalently consider In order to see how these definitions have been motivated, we are describing next the appearance of such (non-)mixed relations in the literature. Remark 3.3. Condition (γ 1 ) has appeared as (standard) condition (M 3) in [15] and in [21] where it has been used to characterize the validity of Borel's theorem in the ultradifferentiable weight sequence setting. Condition (M, N ) γ1 has appeared in the mixed weight sequence situations in [7] (for the Whitney jet map) and in [31] for the Borel map. More precisely in [31] it has turned out that the characterizing condition is not (M, N ) γ1 directly, but does coincide with this condition whenever M has (mg) (as it has been assumed in [7]), see also Remark 3.8 below. In [30], condition (γ r ) has appeared (for r ∈ N >0 ) and it has also been used by the authors in [13]. In these works (γ r ) played a key-role proving extension theorems for ultraholomorphic classes defined by weight sequences since one is working with auxiliary ultradifferentiable-like function classes first defined in [30]. In [32, Lemma 2.2.1] this condition has been introduced for r > 0 arbitrary and a connection to the value γ(M ) has been given. Finally, condition (M, N ) γr has appeared in the recent work by the authors [12] (mainly again for r ∈ N >0 ). There we have generalized the results from [31] to the auxiliary ultradifferentiable-like function classes, moreover in [12, Theorem 5.10], we have given a generalization of the ultradifferentiable Whitney extension results from [7] involving a ramification parameter r > 0. Now we turn to the weight function situation. Let ω be a weight function and r > 0, we write (ω γr ) if with γ(ω) denoting the growth index used and introduced in [14], [11] (by considering a different growth property of ω which is not based on (ω γr )). Note also that 1 γ(ω) does coincide with the socalled upper Matuszewska index, see [1, p. 66]. For a more detailed study of γ(ω) and its connection to the indices studied in [1] we refer to Section 2 in the authors' recent work [10]. Remark 3.4. (ω γ1 ), which is precisely (ω snq ), has appeared for ω = ω M in [15], and in [3] this condition has been characterized in terms of the validity of the ultradifferentiable Whitney extension theorem in the weight function setting. The mixed condition (σ, ω) γ1 has been treated in [5] for the Borel map and in [25] and [22] for the general Whitney jet map (see also [17] for compact convex sets). In these works, condition (σ, ω) γ1 has been identified as the characterizing property. Finally, in [12, Theorem 5.10] we have introduced (ω M , ω N ) γr in order to prove a generalization of the ultradifferentiable Whitney extension results from [7] (again by involving a ramification parameter r > 0). Lemma 3.5. Let M, N ∈ LC be given with µ p ≤ ν p and ω, σ be weight functions with σ ω. Then we have Proof. First, if γ(N ) = 0, γ(ω) = 0, then the conclusion is clear. If these values are strictly positive, then for any 0 < r < γ(N ), γ(ω) we get that (γ r ) for N and (ω γr ) for ω hold true and so also (M, N ) γr and (σ, ω) γr are valid (for any M having µ p ≤ ν p , σ having σ ω). 
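As an orientation, and explicitly as an assumption on our part (normalizations vary between [21], [7], [31] and [12]), the conditions discussed in Remarks 3.3 and 3.4 usually take the following shape in terms of the quotients μ_p = M_p/M_{p-1} and ν_p = N_p/N_{p-1}.

% (γ_1) for a single sequence (strong non-quasianalyticity):
\[
  \exists\, A \ge 1\ \forall\, p \in \mathbb{N}_{>0}:\qquad
  \sum_{j\ge p} \frac{1}{\mu_j} \;\le\; A\,\frac{p}{\mu_p} .
\]
% Expected mixed variant (M,N)_{γ_1} for μ_p \le ν_p (the larger sequence N
% appears inside the sum, the smaller one M on the right-hand side):
\[
  \exists\, A \ge 1\ \forall\, p \in \mathbb{N}_{>0}:\qquad
  \sum_{j\ge p} \frac{1}{\nu_j} \;\le\; A\,\frac{p}{\mu_p} .
\]
% Consistently with the fact that M has (γ_r) if and only if M^{1/r} has (γ_1),
% the ramified variants (γ_r) and (M,N)_{γ_r} should be the same estimates
% written for the quotient sequences (\mu_p^{1/r})_p and (\nu_p^{1/r})_p.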
We recall the next statement which has been shown in [12,Lemmas 5.8,5.9] in order to see how γ(M, N ) and γ(ω M , ω N ) are related. This result is the generalization of [10, Corollary 4.6 (iii)] to the mixed setting. Lemma 3.7. Let M, N ∈ LC be given with µ p ≤ ν p (and which is equivalent to µ r p ≤ ν r p for all r > 0 and implies M r ≤ N r ). Assume that (M, N ) γr holds true for r > 0. Then the associated weight functions are satisfying Consequently, for sequences M and N as assumed above, we always get . In [12] we have generalized this condition to We summarize several more properties. 3.10. Orders of quasianalyticity µ(N ) and µ(ω). In the ultraholomorphic weight sequence setting another important growth index is known and related to the injectivity of the asymptotic Borel map, the so-called order of quasianalyticity. It has been introduced in [26, Def. 3.3, Thm. 3.4], see also [9], [13] and [10]. We use the notation from [10] to avoid confusion in the weight function case below and to have a unified notation (coming from [1, p. 73]). For given N ∈ LC we set (3.2) µ(N ) := sup{r ∈ R >0 : [13, p. 145]. If none (nq r ) holds true, then we put µ(N ) A first immediate consequence is the following: Proof. Note that for r > µ(N ) property (M, N ) γr cannot be valid for any choice M (see (iii) in Remark 3.9). According to this observation one can ask now the following question: Is it possible to get extension results for values γ > 0 with γ(N ) ≤ γ < µ(N )? As we will see in Section 5, for values γ < γ(M, N ) we can prove extension results in a mixed setting between M and N but it is still not clear how large the gap between γ(M, N ) and µ(N ) can be in general. Given N ∈ LC and r > 0 we consider If for none r > 0 (3.3) holds true, then the sup in (3.4) equals 0. As commented in Remark 3.2, given N ∈ LC and r > 0 with having (3.3) for some choice M ∈ LC, µ p ≤ Cν p , then this M is sufficient to guarantee (3.3) for all 0 < r ′ < r as well. In [26,Theorem 3.4] (see also (3.2)) it has been shown that for any N ∈ LC we have Thus for given N ∈ LC and 0 < r < µ(N ) we see that ν p ≥ p r for all p ∈ N sufficiently large and Cν p ≥ p r for all p ∈ N by choosing C large enough. Consequently, in (3.3) the choice M ≡ G r , i.e. the Gevrey sequence with index r > 0, does always make sense and the next result is becoming immediate: Proposition 3.13. Let N ∈ LC be given, then Proof. If µ(N ) = 0, then we have obviously equality. So let now µ(N ) > 0. First, if 0 < r < sup{r ∈ R >0 : (3.3) is satisfied}, then the choice p = 1 in (3.3) immediately implies (nq r ) for N , hence r ≤ µ(N ) and so the first half is shown. Conversely, let r < µ(N ) be given, then (3.3) is satisfied for µ p = p r and which can be taken as seen above. Hence µ(N ) ≤ sup{r ∈ R >0 : (3.3) is satisfied} is also shown and we are done. A disadvantage of taking directly M ≡ G r is that it is not clear that this precise choice is optimal in the sense that it is the largest sequence µ p ≤ Cν p admitting (3.3). To obtain this optimal sequence we recall the following construction: In [24, Section 4.1], and which is based on an idea arising in the proof of [21, Proposition 1.1], it has been shown that to each N ∈ LC satisfying (nq) we can associate a sequence S N with good regularity properties and which has been denoted by descendant. For the reader's convenience we recall now the construction in the following observation and are involving a ramification parameter r > 0 as well. 
We have that s N,r ≤ Cs N,r ′ ⇔ S N,r ≤ CS N,r ′ for all 0 < r ′ ≤ r (since r → τ r k is increasing for all k ∈ N fixed). Hence L N,r ∈ LC and moreover → 0 as k → +∞ and so L N,r is strictly larger than G r . As mentioned above, condition (mg) for N does always imply this property for the descendant S N,1 =: S. However we can obtain a precise characterization of this growth behavior. Proof. Since S ∈ LC, this sequence has (mg) if and only if sup k∈N σ 2k σ k < ∞, e.g. see [24,Lemma 2.2]. By the definitions given in Remark 3.14 we get that σ 2k ≤ Dσ k ⇔ τ k ≤ D 2 τ 2k (with τ k ≡ τ 1 k ) and which is equivalent to k ν k + j≥k νj and so to having νj and so finally (mg) is equivalent to The sum on the left-hand side above is estimated by below by k ν 2k and by above by k ν k (since (ν k ) k is increasing). Hence (3.7) implies ν 2k In this case S is equivalent to N and so S has (mg) if and only if N has this property. (iii) Instead of having (3.6) one can study the more "compact and easy to handle" requirement νj ≥ ε for some ε > 0 and all k ∈ N we immediately get that (3.8) implies (3.6). However and which tends to infinity as k → ∞. Finally let us show that (3.8) holds (and so (3.6)). Let now k ∈ N be given with p! ≤ 2k < (p + 1)!, p ≥ 2. We split the sum ν k k j≥2k and remark that both summands are nonnegative for all k ∈ N under consideration. We study the second summand and distinguish between two cases. If k < p!, then we have k ≥ p!/2 ≥ (p − 1)! and so ν k k If p! ≤ k, then we can estimate by ν k k Thus the descendant S does have (mg). But since N violates this property, N cannot be equivalent to S and so N does not satisfy (γ 1 ). Similarly, there does exist also an inverse construction concerning the descendant, called the predecessor. However, this does not provide any new insight, see Remark 6.6. This growth index likely will have an interpretation for the quasianalyticity of classes of ultraholomorphic functions defined in terms of weight functions, more precisely in order to prove analogous results to [13], see also [26] and [8]. Let ω and r > 0 be given and assume that (3.10) holds with some σ, then the same choice is sufficient to have (3.10) for all 0 < r ′ < r as well. Lemma 3.11 turns now into: Lemma 3.17. Let σ and ω be weight functions (in the sense of Section 2.2) with σ ω. Then we get γ(σ, ω) ≤ µ(ω). Proof. For any r > µ(ω) we see that (ω nq r ) is violated and so (σ, ω) γr cannot be valid for any choice σ (see (iii) in Remark 3.9). The next result is analogous to Proposition 3.13 and showing that µ(ω) is the upper value for our considerations. Proof. If µ(ω) = 0, then we have obviously equality. So let now µ(ω) > 0. We are closing this section by establishing now the connection between µ(ω) and µ(W l ), with W l ∈ Ω and Ω denoting the matrix associated with ω. Proof. First, given r > 0, by the formula on p. 7 in [11] we know that the matrix associated with the weight ω r coincides with the set {V l,r := (W l/r ) 1/r : l > 0}, i.e taking the r-th root of the sequences belonging to Ω and making a re-parametrization of the index (in terms of r). Construction of outer functions. The aim of this paragraph is to obtain holomorphic functions in the right half-plane of C whose growth is accurately controlled by two given weight functions ω and σ. Since in the forthcoming sections we want to treat the weight sequence and weight function case simultaneously we will transfer the general proofs from [14, Section 6] to a mixed setting. 
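Only as an orienting sketch of what follows (the precise kernel is the one of [14, Lemma 6.3] and [32, Lemma 2.1.3] and is not claimed verbatim here): the modulus of the outer function F_a is the Poisson, i.e. harmonic, extension to the right half-plane of boundary values prescribed by the weight,

\[
  \log\bigl|F_a(u+iv)\bigr|
  \;=\; -\,\frac{a}{\pi}\int_{\mathbb{R}} \frac{u}{(v-t)^2+u^2}\;\omega^{\iota}(|t|)\,dt ,
  \qquad w = u+iv \in H_1 ,
\]
% i.e. log|F_a| is, up to the factor -a/\pi, the convolution of f(t) = \omega^{\iota}(|t|)
% with the Poisson kernel g_u(t) = u/(t^2+u^2) appearing in the proof of Lemma 4.4
% below, so that on the boundary |F_a| behaves like \exp(-a\,\omega^{\iota}(|t|)) = \exp(-a\,\omega(1/|t|)).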
First we translate (σ, ω) γ1 into a property for σ ι and ω ι (recall ω ι (t) = ω(1/t) In particular (4.1) holds true for each ω satisfying (σ, ω) γ1 (and σ some other possibly quasianalytic weight). In the next step we are generalizing [14,Lemma 6.3] to a mixed setting, the idea of the construction in the proof is coming from [32, Lemma 2.1.3]. Lemma 4.4. Let ω and σ be two weight functions satisfying (σ, ω) γ1 . Then for all a > 0 there exists a function F a which is holomorphic on the right half-plane H 1 := {w ∈ C : ℜ(w) > 0} ⊆ C and constants A, B ≥ 1 (large) depending only on the weights ω and σ such that Proof. We are following the lines of the proof of [14,Lemma 6.3], see also [32, Lemma 2.1.3] for the single weight sequence case. Since (σ, ω) γ1 is valid, the weight ω has to satisfy (ω nq ). Hence for w ∈ H 1 we can put we need only consider in the proof a = 1 and put for simplicity F := F 1 . where f (t) := ω ι (|t|), g u (t) := u/(t 2 + u 2 ). f and g u are symmetrically nonincreasing functions, hence the convolution too, and as argued in [14,Lemma 6.3] this means that for w → log(|F (w)|) the minimum is attained on the positive real axis and we have for all w ∈ H 1 : For the left-hand side in (4.2) consider K > 0 (small) and get The first integral is estimated by since t → −ω ι (t) is nondecreasing and since by (σ, ω) γ1 we have σ ω. Thus −ω ι (t) ≥ −D(σ ι (t)+1) for some D ≥ 1 and all t > 0 follows. (I) Let M, N ∈ LC be given such that µ p ≤ ν p and γ(M, N ) > 0 holds true. Then for any 0 < γ < γ(M, N ) there exist constants K 1 , K 2 , K 3 , K 4 > 0 depending only on M , N and γ such that for all a > 0 there exists a function G a holomorphic on S γ and satisfying Moreover, if N has in addition (mg), then G a ∈ A { N } (S γ ) with N := (p!N p ) p∈N (and G a is flat at 0). If M has in addition (mg), then there exists K 5 > 0 depending also on given a > 0 such that Let ω and σ be weight functions such that γ(σ, ω) > 0 holds true. Then for any 0 < γ < γ(σ, ω) there exist constants K 1 , K 2 , K 3 > 0 depending only on σ, ω and γ such that for all a > 0 there exists a function G a holomorphic on S γ and satisfying Moreover, if ω is normalized and satisfies (ω 3 ), then G a ∈ A { Ω} (S γ ) (and G a is flat at 0), more precisely where Ω = {W x : x > 0} shall denote the matrix associated with ω and Ω := { W x = (p!W x p ) p∈N : x > 0}. If σ ∈ W 0 , then there exist an index x > 0 and a constant K 4 > 0 depending also on a such that where S x ∈ Σ, Σ the matrix associated with σ. Proof. We will give some more details for the proof of (I) (following the lines of [32, Theorem 2.3.1]). So we can use Lemma 4.4 for ω M s ≡ σ and ω N s ≡ ω and obtain a function F a holomorphic on the right half-plane and satisfying (A|w|) . Then put G a (ξ) = F a (ξ s ), ξ ∈ S δ . Note that, as sδ < 1, the ramification ξ → ξ s maps holomorphically S δ into S δs ⊆ S 1 = H 1 , and so G a is well-defined. We show that the restriction of G a to S γ ⊆ S δ satisfies the desired properties by proving that (4.3) holds indeed on the whole S δ . First we consider the lower estimate. Let ξ ∈ S δ be given, then ℜ(ξ s ) ≥ cos(sδπ/2)|ξ| s (since sδπ/2 < π/2). If B ≥ 1 denotes the constant coming from the left-hand side in (4.2) applied to the weight ω M s (or see (4.8)), then for all t, s > 0, see (2.4), and finally (2.6). Now we consider the right-hand side in (4.2) respectively in (4.8) and proceed as before. 
Let A be the constant coming from the right-hand side of (4.2) applied to ω N s , so and (4.3) has been proved for every ξ ∈ S δ . In order to show G a ∈ A { N } (S γ ) we put in the estimate above A 1 := A 1/s and see If we can show then by applying [14, Lemma 6.4 (i.1)] we see that G a ∈ A { N } (S γ ) (and it is a flat function at 0). Since h N ≤ 1, (4.9) holds true whenever sa 2 ≥ 1 ⇔ sa ≥ 2. But in general we have to use (mg) for N and iterate (2.7) (applied for N ) l-times, l ∈ N chosen minimal to ensure sa 2 ≥ 1 2 l . The proof of (4.4) follows analogously by iterating (mg) for M (if necessary) in order to get rid of the exponent 2aK 2 . The remaining statements, in particular the estimate (4.7), follow analogously as in the proof of [14,Theorem 6.7], replacing τ by ω or σ in the arguments. Right inverses for the asymptotic Borel map in ultraholomorphic classes in sectors The aim of this section is to obtain an extension result in the ultraholomorphic classes considered in a mixed setting for both the weight sequence and the weight function approach following the proofs and techniques in [14,Section 7]. The existence of the optimal flat functions G a obtained in Theorem 4.6 will be the main ingredient in the proof which is inspired by the same technique as in previous works of A. Lastra, S. Malek and the second author [18,19] in the single weight sequence approach. Although for the general construction the weight functions σ and ω need not be normalized, we are interested in working with the weight matrices associated with them, which will be standard log-convex if we ask for normalization and (ω 3 ) to hold. Note that any weight function may be substituted by a normalized equivalent one (e.g. see [3, Remark 1.2 (b)]) and equivalence preserves the property (ω 3 ), so it is no restriction to ask for normalization from the very beginning. 1. An important difference to the complete approach in [14] is, see also the comments given in the introduction in Section 7 there, that condition γ(ω) > 0 and which amounts to (ω 1 ) as shown in [14, Lemma 4.2] will not be valid in general anymore in the mixed situation. In the following we are only requiring γ(σ, ω) > 0 and recall that γ(σ, ω) ≥ γ(ω) as shown above. An explicit example of this situation, having γ(σ, ω) > 0 (as large as desired) and γ(ω) = γ(σ) = 0 will be provided in the Appendix A below. We are able to treat this situation by recognizing that in [14] we have worked in a very general framework for weight functions and the assumption γ(ω) > 0 can be replaced by γ(σ, ω) > 0 without causing problems. Recall that (ω 1 ) is standard in the ultradifferentiable setting and thus our techniques make it possible to treat "exotic" weight function situations as well. Moreover (ω 1 ) has also been used to have that the class defined by ω admits a representation by using the associated weight matrix Ω, see Section 2.4. Thus the warranty that the ultraholomorphic (and also the ultradifferentiable) spaces associated with ω and its corresponding weight matrix Ω coincide is not clear anymore, see the comments preceding (2.9). Therefore the main and most general ultraholomorphic extension result Theorem 5.7 deals with a mixed situation between classes defined by (associated) weight matrices. If one imposes (ω 1 ) on the weights one is able to prove a mixed version of classes defined by weight functions, see Corollary 5.10. Finally, in Theorem 5.12 we will treat the mixed weight sequence case as well. 
The function e a enjoys the following properties: (i) z −1 e a (z) is uniformly integrable at the origin, it is to say, for any t 0 > 0 we have |e a (te iτ )|dt < ∞. (ii) There exist constants K > 0, independent from a, and C > 0, depending on a, such that (iii) For ξ ∈ R, ξ > 0, the values of e a (ξ) are positive real. Proof. The proof is completely the same as for [14, Lemma 7.1], for (i) we apply the right-hand side in (4.5), for (ii) we use (4.6) and (2.8) together with the definition given in (2.5). Analogously as in [14,Definition 7.2] we introduce now the moment function associated with e a . Definition 5.4. We define the moment function associated with the function e a (introduced in the previous Lemma) as From Lemma 5.3 and the definition of h W x in (2.5) we see that for every p ∈ N, So, we easily deduce that the function m a is well defined and continuous in {λ : ℜ(λ) ≥ 0}, and holomorphic in {λ : ℜ(λ) > 0}. Moreover, m a (ξ) is positive for every ξ ≥ 0, and the sequence (m a (p)) p∈N is called the sequence of moments of e a . The next result is generalizing [14,Proposition 7.3], which is similar to Proposition 3.6 in [18], to a mixed setting. Proposition 5.5. Let σ and ω be normalized weight functions with γ(σ, ω) > 0 and such that both weights satisfy (ω 3 ). Let Σ = {S x : x > 0} and Ω = {W x : x > 0} be the weight matrices associated with σ and ω respectively, and for 0 < γ < γ(σ, ω) and a > 0 let G a , e a , m a be the functions previously constructed. Then, there exist constants C 1 , C 2 > 0, both depending on a, such that for every p ∈ N one has where K 2 and K 3 are the constants, not depending on a, appearing in (4.5). Proof. The proof follows the lines as in [14,Proposition 7.3] (based on the arguments by O. Blasco in [2]). For the second estimate in (5.2) we use the second inequality in (4.5) (and here also (2.2) is used); the first estimate in (5.2) follows by applying the first inequality in (4.5). 5.6. Main extension results. Now we are able to formulate and proof the generalization of the main extension result [14,Theorem 7.4]. Theorem 5.7. Let σ and ω be normalized weight functions with γ(σ, ω) > 0 and such that both weights satisfy (ω 3 ) and 0 < γ < γ(σ, ω). Moreover we denote by Σ = {S x : x > 0} and Ω = {W x : x > 0} the weight matrices associated with σ and ω respectively and consider the matrices where S x := (p!S x p ) p∈N and W x := (p!W x p ) p∈N . Then, there exists a constant k 0 > 0 such that for every x > 0 and every h > 0, one can construct a linear and continuous map which allows us to write the preceding difference as Then, we have From (5.4) we deduce that where in the last step we have used that 0 < u < R 0 = K 2 /(4h) we have 1 − 2hu/K 2 > 1/2. In order to estimate f 2 (z), observe that for u ≥ R 0 and 0 ≤ p ≤ N −1 we always have u p ≤ R p 0 u N /R N 0 , and so, using again (5.4) and the value of R 0 , we may write Then, we deduce that In order to conclude, it suffices then to obtain estimates for ∞ 0 |e a (u/z)|u N −1 du. For this, note first that, by the estimates in (4.5), Now, we can follow the first part of the proof of [14,Proposition 7.3] to obtain that Gathering (5.5), (5.6), (5.7) and (5.8), we get A straightforward application of Cauchy's integral formula yields that there exists a constant r, depending only on γ and δ, such that whenever z is restricted to belong to S γ , one has that for every p ∈ N, So, putting k 0 := 4K3r K2 (independent from x and h), we see that f λ ∈ A W 8x ,k0h (S γ ) and f λ W 8x ,k0h ≤ 2C2 C1 |λ| S x ,h . 
Since the map sending λ to f λ is clearly linear, this last inequality implies that the map is also continuous from Λ S x ,h into A W 8x ,k0h (S γ ). Finally, from (5.9) one may easily deduce that B(f λ ) = λ, and we conclude. Corollary 5.10. Let σ, ω ∈ W be given, so that γ(σ, ω) ≥ γ(ω) > 0, and let 0 < γ < γ(σ, ω) and Σ = {S x : x > 0} and Ω = {W x : x > 0} be the weight matrices associated with σ and ω respectively and consider the matrices Let τ 1 ∈ W and τ 2 ∈ W be the weight functions coming from Theorem 5.9 applied to σ and ω respectively, so Then, for every l > 0 there exists l 1 > 0 such that there exists a linear and continuous map such that for all λ ∈ Λ τ1,l one has B • E τ1,τ2 l (λ) = B(f λ ) = λ. Thus we have shown that Proof. Let T i := {T i,x : x > 0} be the weight matrix associated with the weight function τ i , i.e. T i,x p := exp 1 x ϕ * τi (xp) for each x > 0 and p ∈ N, i = 1, 2. We may apply (2.9) in order to deduce that and, as it has already been remarked in [14,Corollary 7.6], we get T 1 {≈} Σ, T 2 {≈} Ω. For the rest of the proof we follow [14,Corollary 7.6] and use for the extension Theorem 5.7. 6. Mixed extension results with only one fixed weight 6.1. Extension results where the weight sequence/function defining the function space is fixed. Using the properties of the index µ(N ) and the construction of the ramified descendant of Section 3.10 we can now prove the following variant of Theorem 5.12. Theorem 6.2. Let N ∈ LC be given with µ(N ) > 0 and let 0 < r < µ(N ). Assume that (3.6) holds true for N 1/r . Then there does exist L ∈ LC having (mg) such that for each 0 < γ < r we get: There exists a constant k 1 > 0 such that for every h > 0, one can construct a linear and continuous map The sequence L is maximal among those M ∈ LC satisfying µ k ≤ Cν k and (M, N ) γr . The important difference between Theorem 5.12 and this result is that, of course, L is depending here on given r. Proof. Let 0 < γ < r < µ(N ) be given according to the requirements above. Then we consider the sequence L N,r defined via the descendant S N,r in (3.5), see Section 3.10. As seen there we have that (L N,r , N ) γr holds true and which proves γ(L N,r , N ) ≥ r > γ. Moreover λ N,r k ≤ Cν k and since N 1/r has (3.6), Lemma 3.15 yields (mg) for S N,r and so for L N,r , too. Thus we can apply Theorem 5.12 to M ≡ L N,r and N and γ unchanged to obtain: There exists a constant k 1 > 0 such that for every h > 0, one can construct a linear and continuous map p ) p∈N and so (6.1) follows by taking L ≡ L N,r . Remark 6.3. Let N ∈ LC be given with µ(N ) > 0. If N has in addition (mg), then each S N,r and L N,r , 0 < r < µ(N ), share this property, see (iv) in Remark 3.14. Using µ(ω) we can prove Theorem 6.2 for the weight function setting, so we have the following variant of Theorem 5.7. Theorem 6.4. Let ω be a normalized weight function with (ω 3 ) and µ(ω) > 0. Then for all 0 < r < µ(ω) there does exist a normalized weight function σ satisfying (ω 3 ) such that for each 0 < γ < r we get: There exists a constant k 0 > 0 such that for every x > 0 and every h > 0, one can construct a linear and continuous map Thus we have shown that B(A { Ω} (S γ ) ⊇ Λ { Σ} (by using for Σ and Ω the same notation as in Theorem 5.7). The function σ is chosen minimal among those normalized weight functions τ satisfying (ω 3 ), τ ω (i.e. ω(t) = O(τ (t))) and enjoying (τ, ω) γr . Proof. According to this value r > 0 given, we consider the weight κ 1/r ω r (see (3.12)) and so (κ 1/r ω r , ω) γr is valid. 
This weight enjoys all properties like ω except normalization (by definition). But normalization can be achieved w.l.o.g. by switching to an equivalent weight (redefining κ 1/r ω r near 0, e.g. see [3, Remark 1.2 (b)]) and which will be denoted by σ. Thus γ(σ, ω) ≥ r > γ and we can apply Theorem 5.7 to these weights σ and ω and the value γ and conclude. Remark 6.5. Due to (3.13) one could try to restate Theorem 6.4 by applying Theorem 6.2 to N ≡ W x , x > 0 arbitrary. However, once chosen γ < µ(ω) = µ(W x ) in Theorem 6.4 we obtain an extension for another weight function σ such that moving the index x we are staying in the same weight matrix associated with σ by the precise choice x → 8x. So here we can take some uniform choice for all sequences in Ω (by obtaining a weight matrix not depending on given x) and which is not following by applying Theorem 6.2. Remark 6.6. Naturally one might ask what happens in the dual situation, that is, fixing the weight sequence or weight function that controls the derivatives at the origin. However, in this case the inverse construction concerning the descendant, called the predecessor, see [24,Remark 4.3], does not provide any new information, since the bounds for the opening are the same as those that are known for the one level extension theorem. (i) and (ii) together tell us that M is lying between two Gevrey sequences. By proving (i) one can verify that µ(M ) = lim inf p→∞ log(µp) log(p) = γ holds true. A slight variation of Lemma A.1 yields the following. Lemma A.2. Let γ > 1 be given, then there exists a sequence M ∈ LC such that (i) M does satisfy (nq), more precisely µ k ≥ k γ for all k ∈ N and so even (nq γ−ε ) holds true for any ε > 0 (small), (ii) µ k ≤ k 2γ 2 for all k ∈ N, (iii) M does not satisfy (β 3 ) or equivalently M = (p!M p ) p∈N does not satisfy (β 1 ) (and consequently M is not strongly nonquasianalytic too), (iv) M does not satisfy (mg). In fact any choice β > 2γ and α := β − 1 would be working for the following proof. Again by construction µ(M ) = lim inf k→∞ log(µ k ) log(k) = γ holds true. Using Lemma A.1 we can now underline the importance of Theorem 5.7 and in particular of Theorem 5.12 as follows. Theorem A.3. There do exist sequences M and N satisfying all requirements from Theorem 5.12 but such that γ(M ) = γ(N ) = γ(ω M ) = γ(ω N ) = 0. Moreover we can achieve γ(M, N ) to be as large as desired. But γ(M ) = γ(N ) = 0 holds true: (β 1 ) or equivalently (γ 1 ) is violated for both sequences M = (p!M p ) p and N = (p!N p ) p (by property (iii)), and so γ(M ) = γ(N ) = 0. And this is equivalent to having γ(ω M ) = γ(ω N ) = 0, because both sequences have (mg), for a proof see [10,Section 4]. In particular we have seen that neither M ∈ SR nor N ∈ SR and by the characterizations shown in [21], not any to M or N equivalent sequence L can belong to class SR. Let M and N denote the sequences from Theorem A.3 above with parameters γ ′ > 1 and γ > 1 subject to (A.1). Then by applying Theorem 5.12 for any given 0 < δ < γ(M, N ) there is k 1 > 0 such that for every h > 0 there exists a continuous linear extension map This kind of extension result is not covered by the theory developed by the authors in [14]. More precisely [14,Theorem 7.4] fails since γ(ω M ) = γ(ω N ) = 0 and also the mixed setting from [14, Section 7.1] cannot be applied, neither to M nor to N directly. Note that both M and N have (mg), thus both matrices associated with ω M and ω N are constant, see (iii) in Lemma 2.7 and Remark 2.5. 
Now let M and N be the sequences constructed in Lemma A.2 with parameters γ ′ and γ respectively and here we require that Again it is straightforward to check that (M, N ) γr holds true for all 0 < r < γ and which implies γ(M, N ) ≥ γ > 0. Since µ(N ) = γ we again have γ(M, N ) = γ and by having µ p ≤ ν p , Lemma 3.7 yields γ(ω M , ω N ) ≥ γ(M, N ) = γ. But here neither M nor N does satisfy (mg) and we cannot apply Theorem 5.12 directly. But Theorem 5.7 applied to σ ≡ ω M and ω ≡ ω N with Σ denoting the matrix associated with ω M and Ω the matrix associated with ω N , yields now the following extension result: For any given 0 < δ < γ(ω M , ω N ) there exists a constant k 0 > 0 such that for every x > 0 and every h > 0, one can construct a linear and continuous extension map Hence we have shown B(A { Ω} (S γ )) ⊇ Λ { Σ} . As mentioned in the introduction and in Remark 5.1 we have that starting directly with a Braun-Meise-Taylor weight function ω with γ(ω) = 0 we do not have (ω 1 ) (as shown in [10, Corollary 2.14]). Hence a basic assumption in the whole theory of ultradifferentiable functions defined in terms of ω, is violated from the very beginning.
12,275.2
2019-08-16T00:00:00.000
[ "Mathematics" ]
Accelerating adaptive inverse distance weighting interpolation algorithm on a graphics processing unit This paper focuses on designing and implementing parallel adaptive inverse distance weighting (AIDW) interpolation algorithms by using the graphics processing unit (GPU). The AIDW is an improved version of the standard IDW, which can adaptively determine the power parameter according to the data points’ spatial distribution pattern and achieve more accurate predictions than those predicted by IDW. In this paper, we first present two versions of the GPU-accelerated AIDW, i.e. the naive version without profiting from the shared memory and the tiled version taking advantage of the shared memory. We also implement the naive version and the tiled version using two data layouts, structure of arrays and array of aligned structures, on both single and double precision. We then evaluate the performance of parallel AIDW by comparing it with its corresponding serial algorithm on three different machines equipped with the GPUs GT730M, M5000 and K40c. The experimental results indicate that: (i) there is no significant difference in the computational efficiency when different data layouts are employed; (ii) the tiled version is always slightly faster than the naive version; and (iii) on single precision the achieved speed-up can be up to 763 (on the GPU M5000), while on double precision the obtained highest speed-up is 197 (on the GPU K40c). To benefit the community, all source code and testing data related to the presented parallel AIDW algorithm are publicly available. Introduction A spatial interpolation algorithm is the method in which the attributes at some known locations (data points) are used to predict the attributes at some unknown locations (interpolated points). Spatial interpolation algorithms, such as the inverse distance weighting (IDW) [1], Kriging [2] and discrete smooth interpolation (DSI) [3,4] related research fields, especially in a geographic information system (GIS) [5]; see a brief summary in [6] and a comparative survey in [7]. Among the above-mentioned three spatial interpolation algorithms, only the Kriging method is computationally intensive due to the inversion of the coefficient matrix, while the other two are easy to compute. However, when the above three algorithms are applied to a large set of points, for example, more than 1 million points, they are still quite computationally expensive, even for the simplest interpolation algorithm IDW. To be able to apply those interpolation algorithms in large-scale applications, the computational efficiency needs to be improved. With the rapid development of multicore central processing unit (CPU) and multicore graphics processing unit (GPU) hardware architecture, parallel computing technology has made remarkable progress. One of the most effective and commonly used strategies for enhancing the computational efficiency of interpolation algorithms is to parallelize the interpolating procedure under various massively parallel computing environments on multicore CPU and/or GPU platforms. For example, by taking advantage of the power of traditional CPU-based parallel programming models, Armstrong & Marciano [8,9] implemented the IDW interpolation algorithm in parallel using Fortran 77 on shared-memory parallel supercomputers, and achieved an efficiency close to 0.9. Guan & Wu [10] performed their parallel IDW algorithms using open multi-processing (OpenMP) running on an Intel Xeon 5310, achieving an excellent efficiency of 0.92. 
Huang et al. [24] designed a parallel IDW interpolation algorithm with the message passing interface (MPI) by incorporating the message passing interface, multiple data (SPMD) and master/slave (M/S) programming modes, and attained a speed-up factor of almost 6 and an efficiency greater than 0.93 under a Linux cluster linked with six independent PCs. Li et al. [25] developed the parallel version of the IDW interpolation using the Java Virtual Machine (JVM) for the multi-threading functionality, and then applied it to predict the distribution of daily fine particulate matter PM 2.5. As general purpose computing on modern GPUs can significantly reduce computational times by performing massively parallel computing, current research efforts are being devoted to parallel IDW algorithms on GPU computing architectures such as Compute Unified Device Architecture (CUDA) [26] and Open Computing Language (OpenCL) [27]. For example, Huraj et al. [28,29] have deployed IDW on GPUs to accelerate snow cover depth prediction. Henneböhl et al. [14] studied the behaviour of IDW on a single GPU depending on the number of data values, the number of prediction locations, and different ratios of data size and prediction locations. Hanzer [30] implemented the standard IDW algorithm using Thrust, PGI Accelerator and OpenCL. Xia et al. [31,32] developed the GPU implementations of an optimized IDW algorithm proposed by them, and obtained 13-33-fold speed-ups in computation time over the sequential version. And quite recently, Mei [33] developed two GPU implementations of the IDW interpolation algorithm, the tiled version and the CUDA Dynamic Parallelism (CDP) version, by taking advantage of shared memory and CUDA Dynamic Parallelism, and found that the tiled version has speed-ups of 120 and 670 over the CPU version when the power parameter p was set to 2 and 3.0, respectively, but the CDP version is 4.8-6.0 times slower than the naive GPU version. In addition, Mei & Tian [34] compared and analysed the impact of data layouts on the efficiency of GPU-accelerated IDW implementations. The power of GPU-based parallelization is also used in other geospatial analysis such as the viewshed analysis. For example, Xia et al. [31] proposed a GPU-based framework for geospatial analysis and found that the GPU implementations can lead to dataset-dependent speed-ups in the range of 28-925-fold for viewshed analysis. Strnad [35] presented the GPU-based parallel implementation of visibility calculation from multiple viewpoints on raster terrain grids. Fang et al. [36] presented a real-time algorithm for viewshed analysis in three-dimensional scenes by using the parallel computing capabilities of a GPU. Zhao et al. [37] proposed a parallel computing approach to viewshed analysis of large terrain data using GPUs. In addition, Stojanovic & Stojanovic [38,39] developed a parallel implementation of map-matching and viewshed analysis using CUDA that was performed on contemporary GPUs. Osterman et al. [40] presented an IO-efficient parallel implementation of an R2 viewshed algorithm for large terrain maps on a CUDA GPU. analysis algorithm to a parallel context to increase the speed at which a viewshed can be rendered. Wang et al. [42] presented a real-time algorithm for viewshed analysis in 3D Digital Earth system (GeoBeans3D) using the parallel computing of GPUs. The Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm [43] is an improved version of the standard IDW. 
The standard IDW is relatively fast and easy to compute, and straightforward to interpret. However, in the standard IDW the distance-decay parameter is applied uniformly throughout the entire study area without considering the distribution of data within it, which leads to less accurate predictions when compared to other interpolation methods such as Kriging [43]. In the AIDW, the distance-decay parameter is a no longer constant value over the entire interpolation space, but can be adaptively calculated using a function derived from the point pattern of the neighbourhood. The AIDW performs better than the constant parameter method in most cases, and better than ordinary Kriging in the cases when the spatial structure in the data could not be modelled effectively by typical variogram functions. In short, the standard IDW is a logical alternative to Kriging, but AIDW offers a better alternative. As stated above, when exploited in large-scale applications, the standard IDW is in general computationally expensive. As an improved and complicated version of the standard IDW, the AIDW in this case will be also computationally expensive. To the best of the authors' knowledge, however, there is currently no existing literature reporting the development of parallel AIDW algorithms on the GPU. In this paper, we introduce our efforts dedicated to designing and implementing the parallel AIDW interpolation algorithm [43] on a single modern graphics processing unit (GPU). We first present a straightforward but suitable-for-paralleling method for finding the nearest points. We then develop two versions of the GPU implementations, i.e. the naive version that does not take advantage of the shared memory and the tiled version that profits from the shared memory. We also implement both the naive version and the tiled version using two data layouts to compare the efficiency. We observe that our GPU implementations can achieve satisfactory speed-ups over the corresponding CPU implementation for varied sizes of testing data. Our contributions in this work can be summarized as follows: (i) we design the parallel AIDW interpolation algorithm by using the GPU and (ii) we develop practical GPU implementations of the parallel AIDW algorithm. The rest of this paper is organized as follows. Section 2 gives a brief introduction to the AIDW interpolation. Section 3 introduces considerations and strategies for accelerating the AIDW interpolation and details of the GPU implementations. Section 4 presents some experimental tests that are performed on single and/or double precision. Section 5 discusses the experimental results. Finally, §6 draws some conclusions. 2. Background: inverse distance weighting and adaptive inverse distance weighting interpolation algorithm The standard inverse distance weighting interpolation algorithm The IDW algorithm is one of the most commonly used spatial interpolation methods in Geosciences, which calculates the prediction values of unknown points (interpolated points) by weighting the average of the values of known points (data points). The name given to this type of methods was motivated by the weighted average applied because it resorts to the inverse of the distance to each known point when calculating the weights. The difference between different forms of IDW interpolation is that they calculate the weights variously. A general form of predicting an interpolated value Z at a given point x based on samples Z i = Z(x i ) for i = 1, 2, . . . 
, n using IDW is the interpolating function Z(x) = ( Σ_{i=1}^{n} Z_i · d(x, x_i)^(−α) ) / ( Σ_{i=1}^{n} d(x, x_i)^(−α) ), with Z(x) = Z_i whenever d(x, x_i) = 0. (2.1) The above equation is a simple IDW weighting function, as defined by Shepard [1], where x denotes a prediction location, x_i is a data point, d is the distance from the known data point x_i to the unknown interpolated point x, n is the total number of data points used in interpolating, and α is an arbitrary positive real number called the power parameter or the distance-decay parameter (typically, α = 2 in the standard IDW). Note that, in the standard IDW, the power/distance-decay parameter α is a user-specified constant value for all unknown interpolated points. The adaptive inverse distance weighting interpolation algorithm The AIDW is an improved version of the standard IDW, which was developed by Lu & Wong [43]. The basic and most important idea behind the AIDW is that it adaptively determines the distance-decay parameter α according to the spatial pattern of data points in the neighbourhood of the interpolated points. In other words, the distance-decay parameter α is no longer a pre-specified constant value but is adaptively adjusted for a specific unknown interpolated point according to the distribution of the data points/sampled locations. When predicting the desired values for the interpolated points using AIDW, there are typically two phases: the first is to adaptively determine the parameter α according to the spatial pattern of data points; and the second is to perform the weighting average of the values of data points. The second phase is the same as that in the standard IDW; see equation (2.1). In AIDW, for each interpolated point, the adaptive determination of the parameter α can be carried out in the following steps. Step 1. Determine the spatial pattern by comparing the observed average nearest-neighbour distance with the expected nearest-neighbour distance. (i) Calculate the expected nearest-neighbour distance r_exp for a random pattern using r_exp = 1 / (2 sqrt(n / A)), (2.2) where n is the number of points in the study area and A is the area of the study region. (ii) Calculate the observed average nearest-neighbour distance r_obs by taking the average of the nearest-neighbour distances for all points: r_obs = (1/k) Σ_{i=1}^{k} d_i, (2.3) where k is the number of nearest-neighbour points and d_i is the i-th nearest-neighbour distance. The k can be specified before interpolating. (iii) Obtain the nearest-neighbour statistic R(S_0) by R(S_0) = r_obs / r_exp, (2.4) where S_0 is the location of an unknown interpolated point. Step 2. Normalize the R(S_0) measure to μ_R such that μ_R is bounded by 0 and 1 by a fuzzy membership function (2.5), where R_min and R_max refer to local nearest-neighbour statistic values (in general, R_min and R_max can be set to 0.0 and 2.0, respectively). Step 3. Determine the distance-decay parameter α by mapping the μ_R value to a range of α by a triangular membership function (2.6) that belongs to certain levels or categories of distance-decay value, where α_1, α_2, α_3, α_4, α_5 are assigned to be five levels or categories of distance-decay value. After adaptively determining the parameter α, the desired prediction value for each interpolated point can be obtained via the weighting average. This phase is the same as that in the standard IDW; see equation (2.1). 3. Graphics processing unit-accelerated adaptive inverse distance weighting interpolation algorithm 3.1. Strategies and considerations for graphics processing unit acceleration 3.1.1. Overall considerations The AIDW algorithm is inherently suitable to be parallelized on GPU architecture.
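Before detailing why, the following CUDA-style sketch summarizes the two phases just described for a single interpolated point. The helper names (aidw_alpha, shepard_predict) are ours, and because we do not want to commit to the exact fuzzy membership function (2.5) and triangular mapping (2.6) of [43], the normalization and the α-mapping below are simplified placeholders (a linear clamp and a piecewise-linear interpolation between the five levels α_1, ..., α_5); only (2.2)-(2.4) and the Shepard weighting (2.1) are used as stated.

#include <cmath>

// Hypothetical helper: adaptive determination of the distance-decay parameter
// alpha for ONE interpolated point, given the distances to its k nearest data
// points (already found), the total number of data points n and the study area A.
__host__ __device__ inline float aidw_alpha(const float *knn_dist, int k,
                                            int n, float area,
                                            const float alpha_level[5])
{
    // Eq. (2.2): expected nearest-neighbour distance for a random pattern.
    float r_exp = 1.0f / (2.0f * sqrtf((float)n / area));
    // Eq. (2.3): observed average nearest-neighbour distance.
    float r_obs = 0.0f;
    for (int i = 0; i < k; ++i) r_obs += knn_dist[i];
    r_obs /= (float)k;
    // Eq. (2.4): nearest-neighbour statistic.
    float R = r_obs / r_exp;
    // Eq. (2.5) of [43] is a fuzzy membership function bounded by 0 and 1;
    // a simple linear clamp between R_min = 0 and R_max = 2 is used here
    // only as a stand-in for it.
    const float R_min = 0.0f, R_max = 2.0f;
    float mu = (R - R_min) / (R_max - R_min);
    mu = fminf(fmaxf(mu, 0.0f), 1.0f);
    // Eq. (2.6) of [43] maps mu to alpha through a triangular membership
    // function over five levels; a piecewise-linear interpolation between the
    // user-chosen levels alpha_level[0..4] is used here as a placeholder.
    float pos  = mu * 4.0f;             // position in [0, 4]
    int   low  = (int)pos;
    if (low >= 4) return alpha_level[4];
    float frac = pos - (float)low;
    return alpha_level[low] * (1.0f - frac) + alpha_level[low + 1] * frac;
}

// Phase 2, identical to the standard IDW (eq. (2.1)): Shepard weighting with
// the adaptively determined power alpha.
__host__ __device__ inline float shepard_predict(const float *dist,
                                                 const float *value,
                                                 int m, float alpha)
{
    float num = 0.0f, den = 0.0f;
    for (int i = 0; i < m; ++i) {
        if (dist[i] == 0.0f) return value[i];   // exact hit on a data point
        float w = 1.0f / powf(dist[i], alpha);
        num += w * value[i];
        den += w;
    }
    return num / den;
}

Note that both helpers only touch data belonging to a single interpolated point, which is precisely what makes a one-thread-per-point parallelization natural.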
This is because, in AIDW, the desired prediction value for each interpolated point can be calculated independently, which means that it is natural to calculate the prediction values for many interpolated points concurrently without any data dependencies between the interpolating procedures for any pair of the interpolated points. Owing to the inherent feature of the AIDW interpolation algorithm, it is allowed a single thread to calculate the interpolation value for an interpolated point. For example, assuming there are n interpolation points that are needed to be predicted their values such as elevations, and then it is needed to allocate n threads to concurrently calculate the desired prediction values for all those n interpolated points. Therefore, the AIDW method is quite suitable to be parallelized on the GPU architecture. In GPU computing, shared memory is expected to be much faster than global memory; thus, any opportunity to replace global memory access by shared memory access should therefore be exploited [7]. A common optimization strategy is called 'tiling', which partitions the data stored in the global memory into subsets called tiles so that each tile fits into the shared memory [15]. This optimization strategy of 'tiling' is also adopted to accelerate the AIDW interpolation algorithm: the coordinates of data points are first transferred from the global memory to the shared memory; then each thread within a thread block can access the coordinates stored in the shared memory concurrently. As the shared memory residing in the GPU is limited per stream multiprocessor, the data in the global memory, that is, the coordinates of data points, need to be first split/tiled into small pieces and then transferred to the shared memory. By employing the 'tiling' strategy, the global memory accesses can be significantly reduced; and thus the overall computational efficiency is expected to be improved. Method for finding the nearest data points The essential difference between the AIDW algorithm and the standard IDW algorithm is that: in the standard IDW, the parameter power α is specified to a constant value (e.g. 2 or 3.0) for all the interpolation points, while, in contrast, in the AIDW the power α is adaptively determined according to the distribution of the interpolated points and data points. In short, in IDW the power α is user-specified and constant before interpolating; but in AIDW the power α is no longer user-specified or constant but adaptively determined in the interpolation. The main steps of the adaptive determination of the power α in the AIDW have been listed in §2.2. Among these steps, the most computationally intensive step is to find the k nearest neighbours (kNN) for each interpolated point. Several effective kNN algorithms have been developed by region partitioning using various data structures [21,[44][45][46]. However, these algorithms are computationally complex in practice, and are not suitable to be used in implementing AIDW. This is because, in AIDW, the kNN search has to be executed within a single CUDA thread rather than in a thread block or grid. In this paper, we present a straightforward but suitable for the GPU parallelized algorithm to find the k nearest data points for each interpolated point. Assuming there are n interpolated points and m data points, for each interpolated point we carry out the following steps: Step 1. 
Calculate the first k distances between the first k data points and the interpolated point; for example, if k is set to 10, then there are 10 distances that need to be calculated; see row (a) in figure 1. Step 2. Sort the first k distances in ascending order; see row (b) in figure 1. Step 3. For each of the remaining (m − k) data points, (i) calculate the distance dist from the data point to the interpolated point; (ii) compare dist with the kth distance: if dist < the kth distance, then replace the kth distance with dist (see row (c)); (iii) iteratively compare and swap the neighbouring two distances from the kth distance to the 1st distance until all the k distances are again sorted in ascending order; see rows (c)-(g) in figure 1. The use of different data layouts Data layout is the form in which data are organized and accessed in memory when operating on multivalued data such as sets of three-dimensional points. Selecting an appropriate data layout is a crucial issue in the development of GPU-accelerated applications. The performance of the same GPU application may differ drastically depending on the data layout used. Typically, there are two major choices of data layout: the array of structures (AoS) and the structure of arrays (SoA) [47]; another type of data layout, the array of aligned structures (AoaS) [34], can easily be generated by adding forced alignment to the layout AoS. In fact, the data layout AoaS can be considered an improved variant of the layout AoS; the three layouts SoA, AoS and AoaS are illustrated in figure 2. Organizing data in the AoS layout leads to coalescing issues because the data are interleaved. By contrast, organizing data according to the SoA layout can generally make full use of the memory bandwidth because there is no data interleaving [47]. In addition, global memory accesses based upon the SoA layout are always coalesced. In practice, it is not always obvious which data layout will achieve better performance for a specific GPU application. A common solution is to implement a specific application using the different data layouts separately and then compare the performances. As mentioned earlier, the data layout AoaS can be considered an improved variant of the layout AoS, and it has been reported that the data layout AoaS can achieve better efficiency than the layout AoS [34]. In this work, we evaluate the performance impact of the two data layouts SoA and AoaS. Implementation details This section presents the details of implementing the GPU-accelerated AIDW interpolation algorithm. We have developed two versions: (i) the naive version that does not take advantage of the shared memory and (ii) the tiled version that exploits the use of the shared memory. For both of the above versions, two implementations are developed separately according to the two data layouts SoA and AoaS. All the source code of the presented parallel AIDW algorithm is publicly available [48]. Naive version In this naive version, only registers and global memory are used, without profiting from the use of the shared memory. The input data and the output data, i.e. the coordinates of the data points and the interpolated points, are stored in the global memory. Assuming that there are m data points used to evaluate the interpolated values of n prediction points, we allocate n threads to perform the parallelization. In other words, each thread within the grid is responsible for predicting the desired interpolation value of one interpolated point.
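As an illustration of this one-thread-per-prediction-point mapping, the following CUDA kernel skeleton sketches the naive version with the SoA layout on single precision. The kernel and parameter names are our own, the kNN bookkeeping is a condensed variant of the straightforward search of §3.1.2, and the adaptive selection of \alpha is collapsed into simple placeholder expressions; it is a minimal sketch rather than the exact kernel of figure 3.

```cpp
// Minimal sketch of the naive AIDW kernel (SoA layout, single precision).
// One thread predicts the value of one interpolated point. Names and the
// condensed alpha selection are illustrative assumptions.
#define KNN 10  // number of nearest neighbours k

__global__ void aidwNaiveKernel(const float* dx, const float* dy, const float* dz,
                                int m,                        // number of data points
                                const float* px, const float* py, float* pz,
                                int n,                        // number of prediction points
                                float area)                   // A: area of the study region
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n) return;

    float x = px[tid], y = py[tid];

    // First pass over the data points: keep the KNN smallest distances with a
    // small insertion step (the straightforward kNN search of Section 3.1.2).
    float knn[KNN];
    for (int i = 0; i < KNN; ++i) knn[i] = 3.402823466e38f;   // ~FLT_MAX
    for (int i = 0; i < m; ++i) {
        float ddx = dx[i] - x, ddy = dy[i] - y;
        float dist = sqrtf(ddx * ddx + ddy * ddy);
        if (dist < knn[KNN - 1]) {
            knn[KNN - 1] = dist;
            for (int j = KNN - 1; j > 0 && knn[j] < knn[j - 1]; --j) {
                float t = knn[j]; knn[j] = knn[j - 1]; knn[j - 1] = t;
            }
        }
    }

    // Adaptive alpha from the nearest-neighbour statistic, equations (2.2)-(2.4);
    // the normalization and the level mapping are condensed placeholders here.
    float rObs = 0.0f;
    for (int i = 0; i < KNN; ++i) rObs += knn[i];
    rObs /= KNN;
    float rExp = 1.0f / (2.0f * sqrtf((float)m / area));
    float R = rObs / rExp;
    float muR = fminf(fmaxf(R / 2.0f, 0.0f), 1.0f);           // stand-in for eq. (2.5)
    float alpha = 1.0f + 2.0f * muR;                          // stand-in for eq. (2.6)

    // Second pass: distance-inverse weighted average, equation (2.1).
    // (A small epsilon could be added to guard against zero distances.)
    float wSum = 0.0f, zSum = 0.0f;
    for (int i = 0; i < m; ++i) {
        float ddx = dx[i] - x, ddy = dy[i] - y;
        float w = 1.0f / powf(sqrtf(ddx * ddx + ddy * ddy), alpha);
        wSum += w;
        zSum += w * dz[i];
    }
    pz[tid] = zSum / wSum;
}
```

A host launch such as aidwNaiveKernel<<<(n + 255) / 256, 256>>>(dx, dy, dz, m, px, py, pz, n, area) would assign one thread per prediction point.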
A complete CUDA kernel is listed in figure 3. The coordinates of all data points and prediction points are stored in arrays declared with the type REAL (e.g. the array dx in figure 3); the word REAL is defined as float and double on single and double precision, respectively. Within each thread, we first find the k nearest data points in order to calculate r_{obs} (see equation (2.3)) according to the straightforward approach introduced in §3.1.2, Method for finding the nearest data points; see the piece of code from line 11 to line 34 in figure 3; then we compute r_{exp} and R(S_0) according to equations (2.2) and (2.4). After that, we normalize the R(S_0) measure to \mu_R, such that \mu_R is bounded by 0 and 1, by a fuzzy membership function; see equation (2.5) and the code from line 38 to line 40 in figure 3. Finally, we determine the distance-decay parameter \alpha by mapping the \mu_R value to a range of \alpha by a triangular membership function; see equation (2.6) and the code from line 42 to line 49. After adaptively determining the power parameter \alpha, we calculate the distances to all the data points again; then, according to these distances and the determined power parameter \alpha, all the m weights are obtained; finally, the desired interpolation value is obtained via the weighted average. This phase of calculating the weighted average is the same as that in the standard IDW method. Note that, in the naive version, the distances from all data points to each prediction point need to be computed twice. The first time is to find the k nearest neighbours/data points; see the code from line 11 to line 32; and the second is to calculate the distance-inverse weights; see the code from line 52 to line 57. Tiled version The workflow of this tiled version is the same as that of the naive version. The major difference between the two versions is that, in this version, the shared memory is exploited to improve the computational efficiency. The basic ideas behind this tiled version are as follows. The CUDA kernel presented in figure 3 is a straightforward implementation of the AIDW algorithm that does not take advantage of the shared memory. Each thread needs to read the coordinates of all data points from global memory. Thus, the coordinates of all data points need to be read n times, where n is the number of interpolated points. In GPU computing, a commonly used optimization strategy is 'tiling', which partitions the data stored in the global memory into subsets called tiles so that each tile fits into the shared memory [15]. This 'tiling' strategy is adopted to accelerate the AIDW interpolation: the coordinates of the data points are first transferred from the global memory to the shared memory; then each thread within a thread block can access the coordinates stored in the shared memory concurrently. In the tiled version, the tile size is set directly to the block size (i.e. the number of threads per block). Each thread within a thread block takes responsibility for loading the coordinates of one data point from the global memory into the shared memory and then computing the distances and inverse weights for those data points stored in the current shared memory.
After all threads within a block have finished computing these partial distances and weights, the next piece of data in the global memory is loaded into the shared memory and used to calculate the current wave of partial distances and weights. It should be noted that: in the tiled version, it is needed to compute the distances from all data points to each prediction point twice. The first time this is carried out to find the k nearest neighbours/data points; and the second time is to calculate the distance-inverse weights. In this tiled version, both of the above two waves of calculating distances are optimized by employing the 'tiling' strategy. By employing the 'tiling' strategy and exploiting the shared memory, the global memory access can be significantly reduced since the coordinates of all data points are only read (n/threadsPerBlock) times rather than n times from the global memory, where n is the number of prediction points and threadsPerBlock denotes the number of threads per block. Furthermore, as stated above, the 'tiling' strategy is applied twice. After calculating each wave of partial distances and weights, each thread accumulates the results of all partial weights and all weighted values into two registers. Finally, the prediction value of each interpolated point can be obtained according to the sums of all partial weights and weighted values and then written into the global memory. Results To evaluate the performance of the GPU-accelerated AIDW method, we have carried out five groups of experimental tests on three different platforms, including one personal laptop equipped with a GeForce GT730M GPU and two workstations equipped with a Quadro M5000 GPU and a Tesla K40c GPU, respectively. All the experimental tests are run on OS Windows 7 Professional (64-bit), Visual Studio 2010 and CUDA v. 7.0. More specifications of the adopted three platforms for carrying out the experimental tests are listed in table 1. Two versions of the GPU-accelerated AIDW, i.e. the naive version and the tiled version, are implemented with the use of both the data layouts SoA and AoaS. These GPU implementations are evaluated on both single precision and double precision. However, the CPU version of the AIDW implementation is only tested on double precision; and all results of this CPU version are employed as the baseline results for comparing computational efficiency. All the data points and interpolated/prediction points are randomly created within a square. The numbers of the prediction points and the data points are equivalent. We use the following five groups of data size, i.e. 10, 50, 100, 500 and 1000 K, where one K represents the number 1024 (1 K = 1024). For the GPU implementations, the recorded execution time includes the cost spent on transferring the input data point from the host to the device and transferring the output data from the device back to the host; but it does not include time consumed in creating the test data. Similarly, for the CPU implementation, the time spent for generating test data is also not considered. testing results, we have observed that: (i) The speed-up is about 100-400; and the highest speed-up is up to 400, which is achieved by the tiled version with the use of the data layout SoA. (ii) The tiled version is about 1.45 times faster than the naive version. (iii) The data layout SoA is slightly faster than the layout AoaS. 
In the experimental test, when the number of the data points and interpolation points is about 1 million (1000 K = 1 024 000), the execution time of the CPU version is more than 18 h, while in contrast the tiled version only needs less than 3 min. Thus, to be used in practical applications, the tiled version of the GPU-accelerated AIDW method on single precision is strongly recommended. Experiments on double precision We also evaluate the computational efficiency of the naive version and the tiled version on double precision (table 3). It is widely known that the arithmetic operated on the GPU architecture on double precision is inherently much slower than that on single precision. In our experimental tests, we also clearly observed this behaviour: on double precision, the speed-up of the GPU version over the CPU version is only about 8 (figure 4b), which is much lower than that achieved on single precision. We have also observed that: (i) there are no performance gains obtained from the tiled version against the naive version and (ii) the use of data layouts, i.e. SoA and AoaS, does not lead to significant differences in computational efficiency. As observed in our experimental tests, on double precision the speed-up generated in most cases is approximately 8, which means the GPU implementations of the AIDW method are far from practical usage. Thus, we strongly recommend users to prefer the GPU implementations on single precision for practical applications. Experiments on single precision On the PC equipped with a Quadro M5000 GPU, the execution time of the CPU and GPU implementations of the AIDW on single precision is listed in table 4. The speed-ups of the GPU implementations over the baseline CPU implementation are illustrated in figure 5a. According to these testing results, we have observed that: Experiments on double precision We also evaluate the computational efficiency of the naive version and the tiled version on double precision (table 5). In our experimental tests, we have clearly observed that, on double precision, the speed-up of the GPU version over the CPU version is about 20-35 (figure 5b), which is much lower than that achieved on single precision. As is the case with the GPU GT730m, we have also observed that: (i) there are no performance gains obtained from the tiled version against the naive version and (ii) the use of data layouts, i.e. SoA and AoaS, does not lead to significant differences in computational efficiency. Experiments on double precision We also evaluate the computational efficiency of the naive version and the tiled version on double precision (table 7). The speed-up of the GPU version over the CPU version is about 100-200 (figure 6b), which is much lower than that achieved on single precision. As with the GPUs GT730M and M5000, we have also observed that: (i) there are no performance gains obtained from the tiled version against the naive version and (ii) the use of data layouts, i.e. SoA and AoaS, does not lead to significant differences in computational efficiency. Impact of data layout on the computational efficiency In this work, we have implemented the naive version and the tiled version with the use of two data layouts SoA and AoaS. In our experimental tests, we have found that, on single precision, the SoA data layout can achieve a slightly better efficiency than the AoaS on the GPU GT730M, while the data layout AoaS is slightly better than the SoA on both GPUs M5000 and K40c. 
However, there is no significant difference in the efficiency when using the above two data layouts on the adopted three GPUs ( figure 7). Each type of data layout has its theoretical advantages. Theoretically, organizing data in the SoA layout can generally make full use of the memory bandwidth due to no data interleaving [47]. In addition, global memory accesses based upon the SoA layout are always coalesced. By contrast, when employing the AoaS layout, the data structure for representing multivalued data such as a set of three-dimensional points is forced to be aligned (figure 2). Operations using the aligned structure would require much fewer memory transactions when accessing global memory, and thus the overall performance efficiency is increased [26]. In practice, the impact of data layouts on computational efficiency strongly depends on: (i) the particular problem that needs to be solved and (ii) the GPUs used to deal with the target problem. In this work, the experimental tests indicate that: on single precision, the SoA data layout can achieve slightly better efficiency than the AoaS on the GPU GT730M, while the data layout AoaS is slightly better than the SoA on both the GPUs M5000 and K40c. The reason for the above behaviour is probably that: for the same specific application, the impact of data layouts on computational efficiency may differ on different GPUs. Although the two data layouts lead to different impacts on computational efficiency, there is no significant difference in the efficiency when using the above two data layouts on the adopted three GPUs ( figure 7). This is probably because the AIDW interpolation algorithm is quite suitable to be parallelized on the GPU, and the parallelization has been well implemented. Therefore, the impact of different data layouts on efficiency for a specific highly parallelized application would not be significant. In summary, the impact of data layouts on computational efficiency may differ on different GPUs. It also should be noted that, it is not always obvious which data layout will achieve better performance for a specific application. Performance comparison of the naive version and tiled version In our experimental tests, we also observed that the tiled version is always a little faster than the naive version no matter which data layout is adopted; see figure 8. This performance gain is due to the use of the shared memory according to the optimization strategy of 'tiling'. On the GPU architecture, the shared memory is inherently much faster than the global memory; thus any opportunity to replace global memory access by shared memory access should therefore be exploited. In the tiled version, the coordinates of data points originally stored in the global memory are divided into small pieces/tiles that fit the size of shared memory, and then loaded from slow global memory to fast shared memory. These coordinates stored in the shared memory can be accessed quite fast by all threads within a thread block when calculating the distances. By blocking the computation in this way, we take advantage of fast shared memory and significantly reduce global memory accesses: the coordinates of data points are only read (n/threadsPerBlock) times from the global memory, where n is the number of prediction points. This is the reason why the tiled version is faster than the naive version. Therefore, from the perspective of practical usage, we recommend the users to adopt the tiled version of the GPU implementations. 
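To make the tiling pattern discussed above concrete, the following device-side fragment sketches how the distance and weight accumulation can stage data-point coordinates through shared memory, assuming (as in the text) that the tile size equals the block size. The identifiers are our own and this is a minimal sketch of the strategy, not the paper's actual tiled kernel.

```cpp
// Sketch of the tiled accumulation loop: data-point coordinates are staged
// through shared memory one block-sized tile at a time, and every thread of
// the block reuses the staged tile before the next tile is loaded.
__device__ void accumulateTiled(const float* dx, const float* dy, const float* dz,
                                int m, float x, float y, float alpha,
                                float* wSum, float* zSum)
{
    extern __shared__ float tile[];          // 3 * blockDim.x floats (dynamic shared memory)
    float* tx = tile;
    float* ty = tile + blockDim.x;
    float* tz = tile + 2 * blockDim.x;

    for (int base = 0; base < m; base += blockDim.x) {
        // Each thread loads the coordinates of one data point into shared memory.
        int load = base + threadIdx.x;
        if (load < m) {
            tx[threadIdx.x] = dx[load];
            ty[threadIdx.x] = dy[load];
            tz[threadIdx.x] = dz[load];
        }
        __syncthreads();

        // All threads of the block compute partial distances/weights on the tile.
        int tileSize = min((int)blockDim.x, m - base);
        for (int j = 0; j < tileSize; ++j) {
            float ddx = tx[j] - x, ddy = ty[j] - y;
            float w = 1.0f / powf(sqrtf(ddx * ddx + ddy * ddy), alpha);
            *wSum += w;
            *zSum += w * tz[j];
        }
        __syncthreads();
    }
}
```

The enclosing kernel would be launched with 3 * threadsPerBlock * sizeof(float) bytes of dynamic shared memory, and both distance-computing passes of the tiled kernel (the kNN search and the final weight accumulation) follow this same staging pattern.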
Performance comparison on single precision and double precision Unlike computations on the CPU, the computational efficiency on the GPU architecture varies significantly between precisions (i.e. single precision and double precision). More specifically, computations on single precision are dramatically faster than those on double precision. This behaviour is inherent to GPUs. One of the key reasons for this is explained in the CUDA Programming Guide [26]. The processing power of the GPU is strongly determined by the number of resident warps on each multiprocessor for a given kernel, and the number of registers used by a kernel can have a significant impact on the number of resident warps. Each double variable uses two registers, while each float variable needs only one register. Hence, for a specific kernel, the number of registers used by the kernel on double precision is typically much larger than that on single precision. Thus, the number of resident warps on each multiprocessor is reduced, and the computational efficiency decreases. This behaviour has also been clearly observed in our experimental results. On the three GPUs used for conducting the experimental tests, the calculations on single precision are approximately 14-48, 13-21 and 1.5-2.5 times faster than those on double precision on the GPUs GT730M, M5000 and K40c, respectively. According to these experimental results, the tiled version of the GPU-accelerated AIDW method on single precision is strongly recommended for practical applications. Moreover, in various scientific computations, if the computation on single precision meets the requirement of computational accuracy, then single precision should be preferred. By contrast, if higher computational accuracy is required, then (i) double precision needs to be used and (ii) a suitable GPU that supports computations on double precision well is also required, for example, the Tesla K40c GPU in our experimental tests. Conclusion In this paper, we have developed two versions of the parallel AIDW interpolation algorithm using a single GPU, i.e. the naive version that does not profit from shared memory and the tiled version that takes advantage of shared memory. We have also implemented the naive version and the tiled version with the use of the two data layouts SoA and AoaS, on both single precision and double precision. We have evaluated the computational performance of the presented parallel AIDW algorithm on three different GPUs (i.e. GT730M, M5000 and K40c). We have observed that: (i) there is no significant difference in computational efficiency when different data layouts are employed; (ii) the tiled version is always slightly faster than the naive version; and (iii) on single precision the achieved speed-up can be up to 763 (on the GPU M5000), while on double precision the highest speed-up obtained is 197 (on the GPU K40c). To benefit the community, all source code and testing data related to the presented AIDW algorithm are publicly available [48].
8,570.8
2015-11-06T00:00:00.000
[ "Computer Science" ]
Graph-Based Translation Via Graph Segmentation One major drawback of phrase-based translation is that it segments an input sentence into continuous phrases. To support linguistically informed source discontinuity, in this paper we construct graphs which combine bigram and dependency relations and propose a graph-based translation model. The model segments an input graph into connected subgraphs, each of which may cover a discontinuous phrase. We use beam search to combine translations of each subgraph left-to-right to produce a complete translation. Experiments on Chinese–English and German– English tasks show that our system is significantly better than the phrase-based model by up to +1.5/+0.5 BLEU scores. By explicitly modeling the graph segmentation, our system obtains further improvement, especially on German–English. Introduction Statistical machine translation (SMT) starts from sequence-based models. The well-known phrasebased (PB) translation model (Koehn et al., 2003) has significantly advanced the progress of SMT by extending translation units from single words to phrases. By using phrases, PB models can capture local phenomena, such as word order, word deletion, and word insertion. However, one of the significant weaknesses in conventional PB models is that only continuous phrases are used, so generalizations such as French ne . . . pas to English not cannot be learned. To solve this, syntax-based models (Galley et al., 2004;Chiang, 2005;Marcu et al., 2006) take tree structures into consideration to learn translation patterns by using non-terminals for generalization. Model C D S (Koehn et al., 2003) • sequence (Galley and Manning, 2010) • • sequence and • tree This work • • graph Table 1: Comparison between our work and previous work in terms of three aspects: keeping continuous phrases (C), allowing discontinuous phrases (D), and input structures (S). However, the expressiveness of these models is confined by hierarchical constraints of the grammars used (Galley and Manning, 2010) since these patterns still cover continuous spans of an input sentence. By contrast, , and Xiong et al. (2007) take treelets from dependency trees as the basic translation units. These treelets are connected and may cover discontinuous phrases. However, their models lack the ability to handle continuous phrases which are not connected in trees but could in fact be extremely important to system performance (Koehn et al., 2003). Galley and Manning (2010) directly extract discontinuous phrases from input sequences. However, without imposing additional restrictions on discontinuity, the amount of extracted rules can be very large and unreliable. Different from previous work (as shown in Table 1), in this paper we use graphs as input structures and propose a graph-based translation model to translate a graph into a target string. The basic translation unit in this model is a connected subgraph which may cover discontinuous phrases. The main contributions of this work are summarized as follows: • We propose to use a graph structure to combine a sequence and a tree (Section 3.1). The graph contains both local relations between words from the sequence and long-distance relations from the tree. • We present a translation model to translate a graph (Section 3). The model segments the graph into subgraphs and uses beam search to generate a complete translation from left to right by combining translation options of each subgraph. 
• We present a set of sparse features to explicitly model the graph segmentation (Section 4). These features are based on edges in the input graph, each of which is either inside a subgraph or connects the subgraph with a previous subgraph. • Experiments (Section 5) on Chinese-English and German-English tasks show that our model is significantly better than the PB model. After incorporating the segmentation model, our system achieves still further improvement. Review: Phrase-based Translation We first review the basic PB translation approach, which will be extended to our graph-based translation model. Given a pair of sentences S, T , the conventional PB model is defined as Equation (1): The target sentence T is broken into I phrases t 1 · · · t I , each of which is a translation of a source phrase s a i . d is a distance-based reordering model. Note that in the basic PB model, the phrase segmentation is not explicitly modeled which means that different segmentations are treated equally (Koehn, 2010). The performance of PB translation relies on the quality of phrase pairs in a translation table. Conventionally, a phrase pair s, t has two properties: (i) s and t are continuous phrases. (ii) s, t is consistent with a word alignment A (Och and Ney, 2004): ∀(i, j) ∈ A, s i ∈ s ⇔ t j ∈ t and ∃s i ∈ s, t j ∈ t, (i, j) ∈ A. PB decoders generate hypotheses (partial translations) from left to right. Each hypothesis maintains a coverage vector to indicate which source words have been translated so far. A hypothesis can be extended on the right by translating an Figure 1: Beam search for phrase-based MT. • denotes a covered source position while indicates an uncovered position (Liu and Huang, 2014). uncovered source phrase. The translation process ends when all source words have been translated. Beam search (as in Figure 1) is taken as an approximate search strategy to reduce the size of the decoding space. Hypotheses which cover the same number of source words are grouped in a stack. Hypotheses can be pruned according to their partial translation cost and an estimated future cost. Graph-Based Translation Our graph-based translation model extends PB translation by translating an input graph rather than a sequence to a target string. The graph is segmented into a sequence of connected subgraphs, each of which corresponds to a target phrase, as in Equation (2): (2) where G(s i ) denotes a connected source subgraph which covers a (discontinuous) phrases i . Building Graphs As a more powerful and natural structure for sentence modeling, a graph can model various kinds of word-relations together in a unified representation. In this paper, we use graphs to combine two commonly used relations: bigram relations and dependency relations. Figure 2 shows an example of a graph. Each edge in the graph denotes either a dependency relation or a bigram relation. Note that the graph we use in this paper is directed, connected, node-labeled and may contain cycles. Bigram relations are implied in sequences and provide local and sequential information on pairs of continuous words. Phrases connected by bigram relations (i.e. continuous phrases) are known to be useful to improve phrase coverage (Hanneman and Lavie, 2009). By contrast, dependency relations come from dependency structures which model syntactic and semantic relations between words. Phrases whose words are connected by dependency relations (also known as treelets) are linguistic-motivated and thus more reliable . 
By combining these two relations together in graphs, we can make use of both continuous and linguistically informed discontinuous phrases, as long as they are connected subgraphs. Training Different from PB translation, the basic translation units in our model are subgraphs. Thus, during training, we extract subgraph-phrase pairs instead of phrase pairs from parallel graph-string sentences associated with word alignments. 1 An example of a translation rule is one whose source side is a subgraph covering the words "FIFA Shijiebei Juxing" and whose target side is the phrase "FIFA World Cup was held". Note that the source side of a rule in our model is a graph which can be used to cover either a continuous phrase or a discontinuous phrase, according to its match in an input graph during decoding. The algorithm for extracting translation rules is shown in Algorithm 1 (Data: a word-aligned graph-string pair (G(S), T, A); Result: a set of translation pairs R). This algorithm traverses each phrase pair ⟨s, t⟩ which is within a length limit and consistent with the given word alignment: for each phrase t in T with |t| ≤ L, it finds the minimal (possibly discontinuous) phrase s in S such that |s| ≤ L and ⟨s, t⟩ is consistent with A (lines 1-2), and it outputs ⟨G(s), t⟩ if s is covered by a connected subgraph G(s) (lines 6-8). A source phrase can be extended with unaligned source words which are adjacent to the phrase (lines 9-14). We use a queue Q to store all phrases which are consistently aligned to the same target phrase (line 3). Model and Decoding We define our model in the log-linear framework (Och and Ney, 2002) over a derivation D = r_1 r_2 · · · r_N, as in Equation (3): P(D) ∝ ∏_i φ_i(D)^{λ_i}, where r_i are translation rules, φ_i are features defined on derivations and λ_i are feature weights. In our experiments, we use the standard 9 features: two translation probabilities p(G(s)|t) and p(t|G(s)), two lexical translation probabilities p_lex(s|t) and p_lex(t|s), a language model lm(t) over a translation t, a rule penalty, a word penalty, an unknown word penalty and a distortion feature d for distance-based reordering. [Figure 3: Distortion calculation for both continuous and discontinuous phrases in a derivation.] The calculation of the distortion feature d in our model is different from the one used in conventional PB models, as we need to take discontinuity into consideration. In this paper, we use a distortion function defined in Galley and Manning (2010) to penalize discontinuous phrases that have relatively long gaps. Figure 3 shows an example of calculating distortion for discontinuous phrases. Our graph-based decoder is very similar to the PB decoder except that, in our decoder, each hypothesis is extended by translating an uncovered subgraph instead of a phrase; positions covered by the subgraph are then marked as translated (a minimal sketch of this extension step is given below).
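As a rough illustration of this decoding step, the following fragment (host-side C++) sketches how a hypothesis with a coverage bitmask could be extended by a rule whose matched subgraph covers a possibly discontinuous set of source positions. The types, names and scoring shown here are our own illustrative assumptions and do not correspond to the Moses-based implementation used in the experiments.

```cpp
// Minimal sketch of extending a decoder hypothesis by translating an
// uncovered subgraph (possibly discontinuous source positions).
#include <cstdint>
#include <string>
#include <vector>

struct Hypothesis {
    uint64_t coverage = 0;     // bit i set <=> source position i has been translated
    std::string translation;   // partial target string, built left to right
    double score = 0.0;
};

struct RuleMatch {
    std::vector<int> positions;  // source positions covered by the matched subgraph
    std::string target;          // target phrase of the rule
    double score;                // model score contribution of the rule
};

// Returns true and fills `next` only if the matched subgraph is entirely uncovered.
bool extend(const Hypothesis& hyp, const RuleMatch& match, Hypothesis* next) {
    uint64_t mask = 0;
    for (int pos : match.positions) mask |= (uint64_t{1} << pos);
    if (hyp.coverage & mask) return false;          // overlaps already-translated positions
    next->coverage = hyp.coverage | mask;           // mark positions as translated
    next->translation = hyp.translation.empty()
                            ? match.target
                            : hyp.translation + " " + match.target;
    next->score = hyp.score + match.score;          // + distortion/LM terms in practice
    return true;
}
```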
Graph Segmentation Model Each derivation in our graph-based translation model implies a sequence of subgraphs (also called a segmentation). By default, similar to PB translation, our model treats each segmentation equally, as shown in Equation (2). However, previous work on PB translation has suggested that such segmentations provide useful information which can improve translation performance. For example, boundary information in a phrase segmentation can be used for reordering models (Xiong et al., 2006; Cherry, 2013). In this paper, we are interested in directly modeling the segmentation using information from graphs. By making the assumption that each subgraph is only dependent on the previous subgraphs, we define a generative process over a graph segmentation as in Equation (4): p(G(s_1), . . . , G(s_I)) = ∏_{i=1}^{I} p(G(s_i) | G(s_1), . . . , G(s_{i−1})). Instead of training a stand-alone discriminative segmentation model to assign each subgraph a probability given the previous subgraphs, we implement the model via sparse features, each of which is extracted at run-time during decoding and then added directly to the log-linear framework, so that these features can be tuned jointly with the other features (of Section 3.3) to directly maximize translation quality. Since a segmentation is obtained by breaking up the connectivity of an input graph, it is intuitive to use edges to model the segmentation. According to Equation (4), for a current subgraph G_i, we only consider those edges which are either inside G_i or connect G_i with a previous subgraph. Based on these edges, we extract sparse features for each node in the subgraph. The set of sparse features is defined over the following elements: n.w and n.c are the word and class of the current node n, and n′.w and n′.c are the word and class of a node n′ connected to n. C, P and H denote that the node is in the current subgraph G_i, the adjacent previous subgraph G_{i−1}, or other previous subgraphs, respectively. Note that we treat the adjacent previous subgraph differently from the others, since information from the last previous unit is quite useful (Xiong et al., 2006; Cherry, 2013). in and out denote that the edge is an incoming or an outgoing edge for the current node n. Figure 4 shows an example of extracting sparse features for a subgraph. [Figure 4: An illustration of extracting sparse features for each node in a subgraph during decoding. The decoder segments the graph in Figure 2 into three subgraphs (solid rectangles) and produces a complete translation by combining translations of each subgraph (dashed rectangles). In this figure, the class of a word is randomly assigned.] Inspired by the success of sparse features in SMT (Cherry, 2013), in this paper we lexicalize only on the top-100 most frequent words. In addition, we group source words into 50 classes by using mkcls, which should provide useful generalization (Cherry, 2013) for our model. Experiment We conduct experiments on Chinese-English (ZH-EN) and German-English (DE-EN) translation tasks. … and News-Test 2013 (WMT13) are test sets. We use mate-tools (http://code.google.com/p/mate-tools/) to perform morphological analysis and parse German sentences (Bohnet, 2010). Then, MaltParser (http://www.maltparser.org/) converts a parse result into a projective dependency tree (Nivre and Nilsson, 2005). Settings In this paper, we mainly report results from five systems under the same configuration. PBMT is built with the PB model in Moses (Koehn et al., 2007). Treelet extends PBMT by taking treelets as the basic translation units. We implement a Treelet model in Moses which produces translations from left to right and uses beam search for decoding. DTU extends the PB model by allowing discontinuous phrases (Galley and Manning, 2010). We implement DTU with source discontinuity in Moses. 4 GBMT is our basic graph-based translation system, while GSM adds the graph segmentation model to GBMT.
Both systems are implemented in Moses. Word alignment is performed by GIZA++ (Och and Ney, 2003) with the heuristic function growdiag-final-and. We use SRILM (Stolcke, 2002) to train a 5-gram language model on the Xinhua portion of the English Gigaword corpus 5th edition with modified Kneser-Ney discounting (Chen and Goodman, 1996). Batch MIRA (Cherry and Foster, 2012) is used to tune weights. BLEU (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2011), and TER (Snover et al., 2006) Each score is an average over three MIRA runs (Clark et al., 2011). * means a system is significantly better than PBMT at p ≤ 0.01. Bold figures mean a system is significantly better than Treelet at p ≤ 0.01. + means a system is significantly better than DTU at p ≤ 0.01. In this table, we mark a system by comparing it with previous ones. Table 3 shows our evaluation results. We find that our GBMT system is significantly better than PBMT as measured by all three metrics across all test sets. Specifically, the improvements are up to +1.5/+0.5 BLEU, +0.3/+0.2 METEOR, and -0.8/-0.4 TER on ZH-EN and DE-EN, respectively. This improvement is reasonable as our system allows discontinuous phrases which can reduce data sparsity and handle long-distance relations (Galley and Manning, 2010). Another argument for discontinuous phrases is that they allow the decoder to use larger translation units which tend to produce better translations (Galley and Manning, 2010). However, this argument was only verified on ZH-EN. Therefore, we are interested in seeing whether we have the same observation in our experiments on both language pairs. We count the used translation rules in MT02 and WMT11 based on different target lengths. The results are shown in Figure 5. We find that both DTU and GBMT indeed tend to use larger translation units on ZH-EN. However, more smaller translation units are used on DE-EN. 5 We presume this is because long-distance reordering is performed more often on ZH-EN than on DE-EN. Based on the fact that the distortion function d measures the reordering distance, we find that the average distortion value in PB on ZH-EN MT02 is 18.4 and 5 We have the same finding on all test sets. 3.5 on DE-EN WMT11. Our observations suggest that the argument that discontinuous phrases allow decoders to use larger translation units should be considered with caution when we explain the benefit of discontinuity on different language pairs. Compared to PBMT, the Treelet system does not show consistent improvements. Our system achieves significantly better BLEU and METEOR scores than Treelet on both ZH-EN and DE-EN, and a better TER score on DE-EN. This suggests that continuous phrases are essential for system robustness since it helps to improve phrase coverage (Hanneman and Lavie, 2009). Lower phrase coverage in Treelet results in more short phrases being used, as shown in Figure 5. In addition, we find that both DTU and our systems do not achieve consistent improvements over Treelet in terms of TER. We observed that both DTU and our systems tend to produce longer translations than Treelet, which might cause unreliable TER evaluation in our experiments as TER favours shorter sentences (He and Way, 2010). Results and Discussion Since discontinuous phrases produced by using syntactic information are fewer in number but more reliable (Koehn et al., 2003), our GBMT system achieves comparable performance with DTU but uses significantly fewer rules, as shown in Table 4. 
After integrating the graph segmentation model to help subgraph selection, GBMT is further improved and the resulted system G2S has significantly better evaluation scores than DTU on both language pairs. However, our segmentation model is more helpful on DE-EN than ZH-EN. We find that the number of features learned on ZH-EN (25K+) is much less than on DE-EN (49K+). This may result in a lower feature coverage during decoding. The lower number of features in ZH-EN could be caused by the fact that the development set MT02 has many fewer sentences than WMT11. Accordingly, we suggest to use a larger development set during tuning to achieve better translation performance when the segmentation model is integrated. Our current model is more akin to addressing problems in phrase-based and treelet-based models by segmenting graphs into pieces rather than extracting a recursive grammar. Therefore, similar to those models, our model is weak at phrase reordering as well. However, we are interesting in the potential power of our model by incorporating lexical reordering (LR) models and comparing it with syntax-based models. Table 5 shows BLEU scores of the hierarchical phrase-based (HPB) system (Chiang, 2005) in Moses 6 and GBMT combined with a word-based Table 5: BLEU scores of a Moses hierarchical phrase-based system (HPB) and our system (GBMT) with a word-based lexical reordering model (LR). LR model (Koehn et al., 2005). We find that the LR model significantly improves our system. GBMT+LR is comparable with the Moses HPB model on Chinese-English and better than HPB on German-English. Figure 6 shows three examples from MT04 to better explain the differences of each system. Example 1 shows that systems which allow discontinuous phrases (namely Treelet, DTU, GBMT, and GSM) successfully translate a Chinese collocation "Yu . . . Wuguan" to "have nothing to do with" while PBMT fails to catch the generalization since it only allows continuous phrases. In Example 2, Treelet translates a discontinuous phrase "Dui . . . Zuofa" (to . . . practice) only as "to" where an important target word "practice" is dropped. By contrast, bigram relations allow our systems (GBMT and GSM) to find a better phrase to translate: "De Zuofa" to "of practice". In addition, DTU translates a discontinuous phrase "De Zuofa . . . Buman" to "dissatisfaction with the approach of". However, the phrase is actually not PBMT: the united states government to brazil has repeatedly expressed its dissatisfaction . Examples Treelet: the government of brazil to the united states has on many occasions expressed their discontent . DTU: the united states has repeatedly expressed its dissatisfaction with the approach of the government to brazil . GBMT: the us government has repeatedly expressed dissatisfaction with the practice of brazil . GSM: the us government has repeatedly expressed dissatisfaction with the practice of brazil . the us government has repeatedly expressed dissatisfaction with the practice of brazil . Example 3 PBMT: the government and all sectors of society should continue to explore in depth and draw on collective wisdom . Treelet: the government must continue to make in-depth discussions with various sectors of the community and the collective wisdom . DTU: the government must continue to work together with various sectors of the community to make an in-depth study and draw on collective wisdom . 
GBMT: the government must continue to work together with various sectors of the community in-depth study and draw on collective wisdom . GSM: the government must continue to make in-depth discussions with various sectors of the community and draw on collective wisdom . REF: the government must continue to hold thorough discussions with all walks of life to pool the wisdom of the masses . government must continue with society each community make in-depth discussion , draw collective wisdom . the government must continue to make in-depth discussions with various sectors of the community and draw on collective wisdom . Figure 6: Translation examples from MT04 produced by different systems. Each source sentence is annotated by dependency relations and additional bigram relations (dotted red edges). We also annotate phrase alignments produced by our system GSM. linguistically motivated and could be unreliable. By disallowing phrases which are not connected in the input graph, GBMT and GSM produce better translations. Example 3 illustrates that our graph segmentation model helps to select better subgraphs. After obtaining a partial translation "the government must", GSM chooses to translate a subgraph which covers a discontinuous phrase "Jixu . . . Zuo" to "continue to make" while GBMT translates "Jixu Yu" (continue . . . with) to "continue to work together with". By selecting the proper subgraph to translate, GSM performs a better reordering on the translation. Related Work Starting from sequence-based models, SMT has been benefiting increasingly from complex structures. Sequence-based MT: Since the breakthrough made by IBM on word-based models in the 1990s (Brown et al., 1993), SMT has developed rapidly. The PB model (Koehn et al., 2003) advanced the state-of-the-art by translating multi-word units, which makes it better able to capture local phenomena. However, a major drawback in PBMT is that only continuous phrases are considered. Galley and Manning (2010) extend PBMT by allowing discontinuity. However, without linguistic structure information such as syntax trees, sequence-based models can learn a large amount of phrases which may be unreliable. Tree-based MT: Compared to sequences, trees provide recursive structures over sentences and can handle long-distance relations. Typically, trees used in SMT are either phrasal structures (Galley et al., 2004;Marcu et al., 2006) or dependency structures Xiong et al., 2007;Xie et al., 2011;Li et al., 2014). However, conventional treebased models only use linguistically well-formed phrases. Although they are more reliable in theory, discarding all phrase pairs which are not linguistically motivated is an overly harsh decision. Therefore, exploring more translation rules usually can significantly improve translation performance (Marcu et al., 2006;DeNeefe et al., 2007;Mi et al., 2008). Graph-based MT: Compared to sequences and trees, graphs are more general and can represent more relations between words. In recent years, graphs have been drawing quite a lot of attention from researchers. Jones et al. (2012) propose a hypergraph-based translation model where hypergraphs are taken as a meaning representation of sentences. However, large corpora with annotated hypergraphs are not readily available for MT. Li et al. (2015) use an edge replacement grammar to translate dependency graphs which are converted from dependency trees by labeling edges. However, their model only focuses on subgraphs which cover continuous phrases. 
Conclusion In this paper, we extend the conventional phrasebased translation model by allowing discontinuous phrases. We use graphs which combine bigram and dependency relations together as inputs and present a graph-based translation model. Experiments on Chinese-English and German-English show our model to be significantly better than the phrase-based model as well as other more sophisticated models. In addition, we present a graph segmentation model to explicitly guide the selection of subgraphs. In experiments, this model further improves our system. In the future, we will extend this model to allow discontinuity on target sides and explore the possibility of directly encoding reordering information in translation rules. We are also interested in using graphs for neural machine translation to see how it can translate and benefit from graphs.
5,682.6
2016-08-01T00:00:00.000
[ "Computer Science" ]
A general class of $C^1$ smooth rational splines: Application to construction of exact ellipses and ellipsoids In this paper, we describe a general class of $C^1$ smooth rational splines that enables, in particular, exact descriptions of ellipses and ellipsoids - some of the most important primitives for CAD and CAE. The univariate rational splines are assembled by transforming multiple sets of NURBS basis functions via so-called design-through-analysis compatible extraction matrices; different sets of NURBS are allowed to have different polynomial degrees and weight functions. Tensor products of the univariate splines yield multivariate splines. In the bivariate setting, we describe how similar design-through-analysis compatible transformations of the tensor-product splines enable the construction of smooth surfaces containing one or two polar singularities. The material is self-contained, and is presented such that all tools can be easily implemented by CAD or CAE practitioners within existing software that support NURBS. To this end, we explicitly present the matrices (a) that describe our splines in terms of NURBS, and (b) that help refine the splines by performing (local) degree elevation and knot insertion. Finally, all $C^1$ spline constructions yield spline basis functions that are locally supported and form a convex partition of unity. Introduction Multivariate splines are used extensively for computeraided design (CAD) and, more recently, for computer-aided engineering (CAE). Smoothness of such splines is a particularly valuable trait. When the aim is to create a (freeform) geometric model for a smooth object, it helps if the splines used for the task are smooth themselves. For instance, this circumvents situations where small displacements to control points may produce 'non-smooth' features such as C 0 kinks, loss of curvature continuity, etc. Similarly, when the aim is to numerically approximate the solution to high-order partial differential equations (PDEs) using isogeometric analysis (IGA) -a generalization of classical finite elements [8] high smoothness of the approximating spaces can be beneficial. For instance, it can allow us to directly discretize the PDEs without any auxiliary variables, thus yielding simpler and more efficient implementations. In this paper, we discuss a general class of C 1 smooth rational splines that allow for the construction of C 1 smooth curves and surfaces. These are an extension of classical C 1 non-uniform rational B-splines (NURBS) as they enjoy the flexibility of choosing locally unrelated weight functions as well as the option of local degree elevation -they can be roughly regarded as piecewise-NURBS. At the same time, they maintain intuitive control-point-based design. Moreover, they enable simple (low-degree) and smooth descriptions of some of the most important primitives for CAD and CAE (but also for computer vision, graphics and robotics): closed, real, non-degenerate quadrics -that is, ellipses in two dimensions and ellipsoids in three dimensions. The ideas we present here build upon those from [25], in multiple directions, and their presentation is motivated by our primary objectives: self-contained, explicit, NURBScompatible descriptions that can be easily and efficiently implemented within existing CAD software. The most important novel contributions are the following. • We describe the usage of classical univariate NURBS to assemble C 1 rational multi-degree spline basis functions using an extraction matrix. 
The general framework was explained in [25], but we provide here a simplified exposition of the construction and a formal proof of the properties; see Remark 2.3. We mainly stick to parametric smoothness, but a construction centred around the notion of geometric smoothness can be formulated as well; see Remark 2.5. • We describe efficient refinement of the C 1 splines leveraging classical NURBS refinement. The novelty here relies in an explicit and simple construction of the refinement matrices. • We describe how tensor-product bivariate C 1 rational splines can be used to build C 1 smooth geometries that may contain one or two polar singularities; the C 1 smooth splines describing the geometries are called polar splines. As above, the idea is based on building an extraction matrix. • We describe efficient refinement of polar splines. In particular, we provide an explicit and simple construction of the refinement matrices. • We provide explicit descriptions of ellipses and ellipsoids built using low-degree C 1 splines, and we detail their extraction in terms of NURBS so that they can be readily implemented and used in CAD or CAE software. Table 1 summarizes the descriptions included in this paper. Extraction matrices At the core of our approach is the notion of the socalled design-through-analysis (DTA) compatible extraction matrix. 1 Roughly speaking, such matrix helps us assemble 'simple splines' into 'more general splines.' Examples are the Bézier extraction matrix introduced to assemble Bernstein polynomials into B-/T-splines [3,19]; the multidegree extraction matrix for assembling elements of extended Tchebycheff spaces into generalized Tchebycheffian B-splines [22,26,7]; and the unstructured spline extraction matrices for assembling tensor-product splines into splines on unstructured quadrilateral meshes [25,27,24]. Here, we apply the concept of extraction in the following context. We start from multiple sets of (univariate or bivariate) NURBS basis functions defined on adjacent domains, and collect all of these functions in the set {b j : j = 1, . . . , m}. Then, we assemble them into more general C 1 rational (polar) splines using a matrix C (with entries C i j ), called the extraction matrix. Denote this new set of splines by {N i : i = 1, . . . , n}, where n < m. These are defined as follows, We are particularly interested in matrices C such that the functions N i • satisfy certain smoothness constraints that may or may not be satisfied by the b j , and • possess the properties of non-negativity, locality, linear independence and partition of unity that the b j already possess. Such extraction matrices are called DTA-compatible. It is easy to see that the action of a DTA-compatible extraction matrix on a convex partition of unity, local basis gives rise to another local basis that also forms a convex partition of unity. Indeed, by summing over i in Equation (1), we have as the b j form a partition of unity. Since C has non-negative entries and is a full-rank matrix, non-negativity and linear independence of N i follow from the non-negativity and linear independence of b j . Related literature As mentioned in the previous section, the construction of smooth univariate splines by joining simpler pieces has been recently explored in [25,22,26] for polynomial multidegree splines, and in [7] for generalized Tchebycheffian splines. These approaches have conceptual similarities with the notion of beta-splines [2]. 
The main differences are that the former approaches do not rely on symbolic computations while the latter does, and the former approaches consider parametric continuity while the latter studies geometric continuity. The use of smooth univariate rational splines for construction of circles has been previously explored in [1,12,13]. It is known that a circle cannot be represented by a single (symmetric) periodic C 1 quadratic NURBS curve [17, Section 7.5] nor a C 2 cubic NURBS curve [5,Section 13.7]. However, it is possible to find C p smooth descriptions using NURBS of degree 2(p+1), which is shown to be the minimal degree in [1]. On the other hand, [12,13] presented a C 1 piecewise quadratic NURBS description of the circle and used it for IGA. Our rational multi-degree splines form a flexible extension of the latter framework, and allow for a variety of exact descriptions of circles using low (multi-)degrees, as indicated in Table 1. In two dimensions, closed quadrics or, more generally, smooth closed surfaces of genus zero can be built using tensor-product splines by introducing polar singularities. For such polar surfaces, subdivision schemes producing C 1 surfaces [10,14] and C 2 surfaces [9,15] have been previously worked out. The corresponding limit surfaces consist of an infinite sequence of surface rings where the faces shrink to a point in the limit. A more CAD-friendly finite construction was developed in [16]; this approach constructs 'shape' basis functions for C 2 polar splines with bi-degree (6, 3). These basis functions correspond to unique Fourier frequencies in the polar expansion of a quadratic surface. The 'shape' basis does not enjoy non-negativity and does not form a partition of unity, and extensions of it to higher smoothness leads to degrees of freedom that control non-intuitive shape parameters. Similar recent constructions for obtaining C 1 polar spline caps can be found in [11]. Curvature continuous polar NURBS surfaces were discussed in [21], and [20] presented a construction of polar caps using periodic B-spline surfaces with G n continuity for arbitrary n. On the CAE side, a standard circular serendipity-type element for IGA was proposed in [12], and C k smooth basis functions over singular parametrizations of triangular domains were constructed in [23]. A design-through-analysis friendly construction of C k smooth polar surfaces was recently proposed in [25], and the current work builds further upon this construction. A completely different approach for dealing with curves and surfaces is the use of implicit representations [6]. Such representations enjoy nice geometric properties (especially for simple shapes). For instance, they allow for a straightforward point membership classification. On the other hand, explicit smooth B-spline representations are more convenient for direct geometric modeling and (local) modification. Outline In Section 2, we present the construction and refinement of C 1 smooth rational multi-degree spline curves via explicitly defined extraction and refinement matrices. The construction and refinement of C 1 smooth polar surfaces using Piecewise-rational curves In this section, we focus on a multi-degree extension of univariate NURBS splines. The multi-degree spline space is defined as a collection of classical NURBS spaces (with possibly different polynomial degrees and weight functions) glued together C 1 smoothly. For such space we present a construction of a set of basis functions, with similar properties to classical NURBS. 
After discussing some preliminary material on NURBS in Section 2.1, we elaborate how these basis functions can be computed through a DTA-compatible extraction matrix in Section 2.2. A more general but also more complex algorithmic construction has been detailed in [25,Section 2] and further explored in [22,26] for polynomial multi-degree splines. Then, in Section 2.3, we give an explicit procedure how to compute a refined representation of a given curve. Finally, in Section 2.4, we illustrate how this tool can be used to describe arbitrary ellipses in a C 1 smooth fashion using low-degree piecewise-rational curve representations suited for integrated design and analysis. Preliminaries on NURBS We start by defining notation for NURBS basis functions, and introduce some classical relations that can be found, e.g., in [18,17]. Given a basic interval I := [x 1 , x 2 ] ⊂ , let us denote with ξ an open knot vector of degree p ∈ and length n+p+1 ∈ , i.e., ξ := [ξ 1 , ξ 2 , . . . , ξ n+p+1 ], ξ i+1 ≥ ξ i , The number of times a knot value ξ i is duplicated in the knot vector is called the knot's multiplicity. The multiplicity of ξ i is denoted with m i , and we assume that 1 ≤ m i ≤ p − 1. The corresponding set of B-splines {b j,p : j = 1, . . . , n} are defined using the recursive relation, and under the convention that fractions with zero denominator have value zero. With the above definition, all the B-splines take the value zero at the end point x 2 . Therefore, in order to avoid asymmetry over the interval I, it is common to assume the B-splines to be left continuous at x 2 . We will follow suit. Let us denote with w a weight vector of length n, i.e., w := [w 1 , w 2 , . . . , w n ], w i > 0. The corresponding set of NURBS {b w j,p : j = 1, . . . , n} are defined by . Each b w j,p is non-negative on I and is locally supported on [ξ j , ξ j+p+1 ]. Moreover, the functions b w j,p are linearly independent and form a partition of unity. They satisfy the following end-point conditions: The NURBS space corresponding to ξ and w is denoted with [ξ, w ] and is defined as the span of {b w j,p : j = 1, . . . , n}. This is a space of piecewise-rational functions of degree p with smoothness C p−m i at knot ξ i and its dimension is n. The assumption on the multiplicity will ensure us global C 1 smoothness. Note that when w 1 = · · · = w n , the members of this space are piecewise-polynomial. Remark 2.1. The structure of ξ in Equation (2) is such that p, m i , n and I are embedded in it. Therefore, we will assume that once a knot vector ξ is known, so are the degree, smoothness, and dimension of the corresponding NURBS space [ξ, w ]. We identify a function f ∈ [ξ, w ] with the vector of its coefficients [ f 1 , . . . , f n ], Only the first (last) k + 1 basis functions contribute towards the k-th order derivative at the left (right) end point of I. In particular, we have A NURBS curve embedded in d , d ≥ 2, can be constructed as where f j ∈ d are the control points assigned to each basis function. All coordinate functions of this curve belong to [ξ, w ] and therefore all the above relations hold for them. Rational multi-degree B-splines Consider m open knot vectors ξ (i) of degree p (i) , i = 1, . . . , m, defined as in Equation (2). We denote the left and right end points of the interval I (i) associated to ξ (i) with x (i) 1 and x (i) 2 , respectively. The collection Ξ := (ξ (1) , . . . , ξ (m) ) is called an m-segment knot vector configuration. 
The multidegree spline spaces will be constructed by considering spline spaces over the knot vectors ξ (i) , which are glued together with certain smoothness requirements at the end points x is called the i-th segment join. We define the mapping φ (i) for each segment i = 1, . . . , m, for an arbitrarily chosen origin τ ) ⊂ , and we construct the composed interval (1) , . . . , w (m) ) be a sequence of weight vectors defined as in Equation (3). We refer the reader to Figure 1 for a visual illustration of the notation of the above concepts, in case m = 2 and p (1) = 2, p (2) = 3. The space of rational multi-degree splines is defined as and the periodic space of rational multi-degree splines as Figure 1: A visual illustration of the notation and the construction of rational multi-degree B-splines as described in Section 2.2. Here, the quadratic (blue) and cubic (red) NURBS shown at the bottom are used to build the C 1 multi-degree B-splines shown at the top. The elements of [Ξ, W ] and per [Ξ, W ] are piecewise-NURBS functions such that the pieces meet with C 1 continuity at each segment join. It is clear that classical NURBS spaces are a special case of the rational multi-degree spline spaces. In the following, we build a suitable basis for the spaces In the first step, we map these basis functions from I (i) to Ω (i) using φ (i) in Equation (5), and extend them on the entire interval Ω by defining them to be zero outside Ω (i) . More precisely, specifying the cumulative local dimensions µ i for i = 0, . . . , m, we define for i = 1, . . . , m and j = 1, . . . , n (i) , For the sake of simplicity, we dropped the reference to the (local) degree and weight in the notation. From the properties of NURBS, it is clear that the functions b 1 , . . . , b µ m are linearly independent and form a non-negative partition of unity on Ω. We arrange these basis functions in a column vector b of length µ m . We refer the reader again to Figure 1 for a visual illustration of the notation of the above concepts. Now, we construct extraction matrices H and H per such that the functions in {B i : i = 1, . . . , n} and {B per i span [Ξ, W ] and per [Ξ, W ], respectively. The key here, and the reason our approach can be efficiently implemented by design, is that these extraction matrices can be explicitly specified. To this end, we define counters η i for i = 0, . . . , m, and parameters α (i) and β (i) for i = 1, . . . , m − 1, In the periodic setting, α (m) and β (m) are computed using the above equations by identifying the index i + 1 with 1. Recall Equation (4) to see the motivation behind the definition of the above parameters. Then, we define a common sparse matrix H c of size η m × (µ m − 2), whose non-zero entries H c i j are identified as follows: for i = 1, . . . , m and j = 1, . . . , and for i = 1, . . . , m − 1, The desired extraction matrices in Equation (6) are then specified as follows: The number of rows in the two matrices are denoted with n := η m + 2 and n per := η m , respectively. The sparse and simple structure of both matrices means that it is easy to verify that both have full rank. Indeed, this conclusion can be directly deduced from the full rank of H c , which in turn is implied by Equation (7). Moreover, their entries are nonnegative, and the column sum is equal to one. Hence, we conclude that these matrices are DTA-compatible. 
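The three properties that make an extraction matrix DTA-compatible (non-negative entries, unit column sums, full rank) are straightforward to verify numerically. The sketch below does exactly that for a small placeholder matrix; the placeholder entries are illustrative only and are not the values prescribed by Equations (7)-(8).

```python
import numpy as np

def is_dta_compatible(H, tol=1e-12):
    """Check the three properties used to call an extraction matrix
    DTA-compatible: non-negative entries, unit column sums (so the smooth
    basis inherits the partition of unity from the local NURBS), and full
    rank (so the smooth basis functions are linearly independent)."""
    nonneg = np.all(H >= -tol)
    colsum_one = np.allclose(H.sum(axis=0), 1.0)
    full_rank = np.linalg.matrix_rank(H) == min(H.shape)
    return nonneg and colsum_one and full_rank

def smooth_basis(H, b_local):
    """Smooth C1 basis values B(t) = H @ b(t), where b(t) stacks the
    segment-wise NURBS basis values at a parameter t."""
    return H @ b_local

# hypothetical 4x5 extraction matrix for two glued segments; the entries are
# placeholders for illustration, not the ones from Equations (7)-(8)
H = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.4, 0.0, 0.0],
    [0.0, 0.0, 0.6, 0.7, 0.0],
    [0.0, 0.0, 0.0, 0.3, 1.0],
])
print(is_dta_compatible(H))   # True for this placeholder example
```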
How these matrices help us build C 1 splines can be understood by taking into account Equation (4) More precisely, only these four functions have non-vanishing values and first derivatives here. In view of (4), a spline f given by We can verify that the entries of H satisfy exactly such relations. Indeed, for some j, the matrix H defines two new functions B j and B j+1 such that When setting [ f 1 , f 2 , f 3 , f 4 ] equal to the first or the second row ofH (i) , we see that Equation (10) is satisfied. The following result follows from the above discussion. Remark 2.3. In [25, Section 2.3.5] it was observed that the C 1 smooth piecewise-rational basis functions enjoy the properties described in Theorem 2.2. However, a formal proof was missing. It was also pointed out that the property of nonnegativity is in general not present in case of C 2 or higher smoothness. On the other hand, this is possible when restricting to polynomial pieces [26]. Remark 2.4. With the aim of designing quadric curves, also called conics, it is natural to choose local NURBS spaces of the same degree p and defined on the same uniform knot vector ξ. Moreover, it is common to set w Under these circumstances, the ratios in Equation (8) read as Finally, if there is additional symmetry in the choice of weights, so w (i) , we simply get Once we have computed a DTA-compatible extraction matrix H (or H per ), given n control points f i ∈ d , d ≥ 2, we can construct a piecewise-rational curve f embedded in d , For a fixed curve, the transpose of H (or H per ) defines the relationship between control points of the b j (discontinuous at the segment joins) and control points of the smooth B i . More precisely, if Remark 2.5. When dealing with curves, the proposed piecewise-NURBS framework can also be formulated in the context of geometric continuity [4]. In such case, the C 1 smoothness condition at the segment join in (10) is replaced by the G 1 smoothness condition for a given geometric shape parameter γ (i) > 0, resulting in the matrixH It is clear that this matrix is still DTA-compatible. Refinement of piecewise-rational curves The rational spline spaces defined in the previous section can be refined in a multitude of ways. We could reduce the smoothness at segment joins, raise the polynomial degrees of local NURBS spaces, and/or insert new knots in local NURBS spaces [25,Section 2.4.3]. A combination of these possibilities could be judiciously employed to achieve spline spaces that provide higher resolution or approximation power exactly where needed. In this section, we present an explicit construction of refined representations of a given piecewiserational curve. Before delving into the details of the refinement procedure, we first define two matrices G and G per that can be regarded as right inverses of the extraction matrices H and H per , respectively. Looking at the structure of the matrix H c specified in Equations (7)-(8), we can define a sparse matrix G c of size (µ m − 2) × η m , whose non-zero entries G c i j are identified as follows: for i = 1, . . . , m and j = 1, . . . , n (i) − 2, From its construction it is clear that the product H c G c is equal to the identity matrix. Similarly, keeping in mind Equation (9), the matrices give rise to products HG and H per G per that are equal to identity matrices. Now, let be a given spline space and let us denote the target refined space with˜ . For simplicity of notation, we drop the superscript per in case of periodicity. 
Then, we consider the two unique representations of a curve f with coordinate functions in ⊂˜ , . We now seek the refinement matrix R of size n ×ñ that helps us computeF from F , i.e.,F = F R. Assume that H andH are the extraction matrices corresponding to the spaces and˜ , respectively. Incorporating these matrices in the representations in Equation (13) This implies that we can compute R by solving the following (overdetermined) linear system with a unique solution, RH = HS. After multiplication of both sides of this system with the ma-trixG (corresponding toH) as defined in Equation (12), we arrive at R = HSG. (14) Note that the application ofG in Equation (14) means that a subset ofñ columns of HS are selected to form R. Remark 2.6. The definition ofG is done for the sake of simplicity of computation of R in Equation (14), but is not unique. Any matrix that is a right inverse ofH would be a valid choice as well, such as the standard Moore-Penrose right inverseH T (HH T ) −1 . Circles and ellipses We now present the general construction of ellipses (and as a special case also circles) using the C 1 rational splines introduced thus far. We present three approaches for doing so using splines of low(est) degree, i.e., C 1 splines of quadratic degree, cubic degree and mixed quadratic/cubic multidegree. All approaches will construct four C 1 piecewise-NURBS functions B i and associated control points f i , i = 1, . . . , 4, such that the curve f , describes the exact ellipse centred at (0, 0) and with axis lengths (a x , a y ), Since the splines B i form a partition of unity, these ellipses can be affinely transformed by directly applying the transformation to the control points f i . Subdivided or higher-degree representations can be easily obtained by refining the representations provided here (see Section 2.3). To visually illustrate the smoothness of f , we will also show the curvef obtained by perturbing one of the control points. Since all B i are smooth, the perturbed curve will also be smooth. For uniformity throughout the examples, we will choose the control points of the perturbed curve as C 1 description of degree 2 Here we present a C 1 quadratic description of the ellipse in Equation (15) Figure 2 (a, top row). Finally, we can build an ellipse centred at (0, 0) and with axis lengths (a x , a y ) by combining the splines B i with the control points f i defined as Remark 2.7. To verify that the curve f satisfies Equation (15), we can proceed as follows. The simplest approach is to numerically evaluate f (t) at all t and plug the result in that equation. Alternatively, this verification can also be performed analytically by looking at the explicit expressions of the rational pieces that form f . For instance, consider the first quadratic rational piece, g (1) , that is a part of f . As discussed in Equation (11), we can get the control points of this piece, denoted with g (1) j , j ∈ {1, 2, 3}, by applying the transpose of (a submatrix of) H per from Equation (17) to a vector containing the points f i , i.e., g (1) This yields the control points 3 = (a x , 0). Combining the above control points with the NURBS basis defined on the first segment, where w(t) := (1− t) 2 + 2t(1− t)+ t 2 , some simple algebra shows that g (1) indeed satisfies Equation (15). Verifications for the other pieces of f can be similarly done. Example 2.8. Choosing a x = a y = 1, we obtain a circle of radius 1, as shown in Figure 2 (a, middle row). 
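The verification procedure outlined in Remark 2.7 can also be carried out in a few lines of code for a single rational piece. The sketch below evaluates the classical quadratic rational Bezier quarter-circle arc (with the textbook middle weight sqrt(2)/2, which is not necessarily the weight appearing in Equation (17)) and confirms that every evaluated point lies on the unit circle; the periodic C1 spline of Section 2.4.1 glues four such pieces together, and scaling the control points by (a_x, a_y) produces the corresponding ellipse arc.

```python
import numpy as np

def rational_quadratic_bezier(P, w, t):
    """Point on a rational quadratic Bezier arc with control points P (3x2)
    and weights w (3,), evaluated at parameter t in [0, 1]."""
    basis = np.array([(1 - t) ** 2, 2 * t * (1 - t), t ** 2])
    return (w * basis) @ P / (w @ basis)

# classical quarter-circle arc; four such rational pieces are glued C1 in the text
P = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
w = np.array([1.0, np.sqrt(2.0) / 2.0, 1.0])
for t in np.linspace(0.0, 1.0, 5):
    x, y = rational_quadratic_bezier(P, w, t)
    print(round(x * x + y * y, 12))     # always 1.0: the arc lies on the unit circle
```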
This C 1 quadratic description is equivalent to the one used in [12]. The choice a x = 2a y = 1 yields an ellipse with axis lengths (1, 1 2 ), as shown in Figure 2 (a, bottom row). The perturbed versions of these conics, with the control points chosen as in Equation (16), are shown as well and they remain clearly smooth. C 1 description of degree 3 Here we present a C 1 cubic representation of the ellipse in Equation (15) These basis functions are shown in Figure 2 (b, top row). Choosing the associated control points f i as we get a C 1 cubic description of an ellipse centred at (0, 0) with axis lengths (a x , a y ). This can be verified in the vein of Remark 2.7. Example 2.9. Choosing a x = a y = 1, we obtain a circle of radius 1, as shown in Figure 2 (b, middle row). This C 1 cubic description is equivalent to the one used in [25]. The choice a x = 2a y = 1 yields an ellipse with axis lengths (1, 1 2 ), as shown in Figure 2 (b, bottom row). The perturbed versions of these conics, with the control points chosen as in Equation (16), are shown as well and they remain clearly smooth. It can be observed that, compared to the description from Section 2.4.1, the control points here are at a greater distance from the curve. This is completely analogous to the behavior of classical NURBS. Then, we can define four C 1 multi-degree piecewise-NURBS functions B i on Ω using the extraction matrix Choosing the associated control points f i as we get a C 1 multi-degree description of an ellipse centred at (0, 0) with axis lengths (a x , a y ). This can be verified in the vein of Remark 2.7. Example 2.11. Choosing a x = a y = 1, we obtain a circle of radius 1, as shown in Figure 2 (c, middle row). The choice a x = 2a y = 1 yields an ellipse with axis lengths (1, 1 2 ), as shown in Figure 2 (c, bottom row). The perturbed versions of these conics, with the control points chosen as in Equation (16), are shown as well and they remain clearly smooth. Once again, compared to the description from Section 2.4.1, the control points here lie at a greater distance from the cubic portion of the curve. Piecewise-rational polar surfaces In this section, we describe how to construct C 1 smooth representations for polar surfaces containing single or double polar singularities (e.g., hemispheres and spheres, respectively). Such surfaces can be obtained by starting from a bivariate tensor-product (piecewise-NURBS) spline patch and collapsing one or two of its edges, respectively, as illustrated in Figure 3. Each of such edge collapses creates a polar point and can be achieved by coalescing the control points related to basis functions with non-zero values on the edge. In general, however, this control-point coalescing will introduce kinks at the poles and the surface representation will not be smooth. To achieve overall smoothness, additional conditions need to be satisfied by the control points [25, Section 3]. In Section 3.1, we derive C 1 smoothness conditions at a polar point, and they enable us to build smooth polar splines as linear combinations of bivariate tensor-product splines in Section 3.2. Then, in Section 3.3, we give an explicit procedure how to compute a refined representation of a given polar surface. Finally, in Section 3.4, we present explicit descriptions of arbitrary ellipsoids using C 1 smooth low-degree polar spline representations suited for integrated design and analysis. 
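The edge collapse described above is easy to picture at the level of control points. The following sketch builds a polar control net in which the entire first ring of control points coincides at the pole; the uniform spacing of angles and radii is an assumption made purely for illustration and does not reproduce the standardized choice fixed later in Equations (19)-(20).

```python
import numpy as np

def polar_control_net(n_s, n_t):
    """Control points F_ij = (rho_j cos(theta_i), rho_j sin(theta_i)) of the
    polar map F. Setting rho_1 = 0 collapses the entire bottom edge of the
    rectangular domain into the single polar point (0, 0). Uniformly spaced
    angles and radii are assumed here purely for illustration."""
    theta = 2.0 * np.pi * np.arange(n_s) / n_s
    rho = np.linspace(0.0, 1.0, n_t)
    F = np.stack([np.outer(np.cos(theta), rho),
                  np.outer(np.sin(theta), rho)], axis=-1)   # shape (n_s, n_t, 2)
    return F

F = polar_control_net(n_s=8, n_t=4)
print(bool(np.allclose(F[:, 0, :], 0.0)))   # True: the first ring coalesces at the pole
```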
Smoothness conditions at the polar points A polar surface will be smooth at a polar point if it can be locally (re)parameterized in a smooth way. Such parameterizations can be specified in a constructive manner and we elaborate upon it in this section. The resulting conditions will help us build smooth polar B-splines in the next section. As shown in Figure 3, we first describe the initial setupa tensor-product spline space on a rectangular domain. We start from two univariate C 1 rational spline spaces s , t defined on the univariate domains Ω s := [s 1 , s 2 ] and Ω t := [t 1 , t 2 ], respectively; the superscripts of s and t are meant to indicate the symbols used for the respective coordinates. Using a Cartesian product, we build the rectangular domain Ω := Ω s × Ω t , and on Ω we define the tensor-product spline space := s ⊗ t . Without loss of generality, we assume that s 1 = t 1 = 0. This tensor-product spline space is spanned by tensor-product B-spline basis functions B i j , i = 1, . . . , n s ; j = 1, . . . , n t . Here, n s and n t denote the respective dimensions of the chosen univariate spline spaces; the basis functions spanning these spaces are denoted with B s i and B t j . Then, the tensor-product basis function B i j is simply the product B s i B t j . The functions B i j are assumed to be periodic in s and non-periodic in t. Now, let us use the functions B i j to map the domain Ω to a polar surface using edge-collapse. Then, the smoothness conditions at a collapsed edge will only involve those B i j that have non-zero first derivatives there. Observe that, if n t ≥ 4, then any B i j with non-zero first derivatives at the bottom edge of Ω will have zero first derivatives at its top edge, and vice versa. The upshot is that, when we are collapsing both the bottom and top edges of Ω into two polar points, as in Figure 3 (b), the smoothness conditions at those points are independent of each other and can be resolved separately for n t ≥ 4. Single polar point In light of the above discussion, in the following we first focus on the case of a single collapsed edge, i.e., the one shown in Figure 3 (a). We derive smoothness conditions that will help us build smooth polar spline functions (and, using them, smooth polar surfaces). This is done by explicitly specifying the parameterization with respect to which the spline functions are deemed smooth. First, we construct a planar disklike domain Ω pol , called the polar parametric domain, via a suitable polar map F ; see Figure 3 (a). Next, for an arbitrary In general, f pol will be multivalued at the pole. Finally, we derive the required smoothness conditions by asking for f pol to be C 1 smooth at the polar point. We start by building F . Assign the control point F i j := (ρ j cos(θ i ), ρ j sin(θ i )) ∈ 2 to the basis function B i j , where and The above choice of control-point values has been made in the interest of standardization and is not unique. Using these control points, we can construct the disk-like domain Ω pol with the aid of the map F from Ω to Ω pol , t). (21) Note that the above construction will not necessarily yield an exactly circular domain Ω pol ; its shape will depend on the choice of . This domain will serve as the reference element for the smoothness of polar configurations, i.e., we will define polar splines such that they are C 1 smooth functions over Ω pol . It is clear that for all s ∈ Ω s , where (0, 0) ∈ Ω pol is the polar point. 
Note that this implies Let B pol i j be the image of B i j under the polar map F : Ω → Ω pol in Equation (21) so that Then, for given coefficients f i j , a polar spline function f pol over Ω pol can be constructed as We can pull f pol back to Ω as follows, Moreover, by using the chain rule we can also relate the partial derivatives of f and f pol : For f pol to be C 1 smooth at the polar point, there must exist real values α, β, γ such that In view of (22) and (23) this means for all s ∈ Ω s , In particular, since only B i j , j ≤ 2, have non-zero values and derivatives when t = 0, the above condition translates to the following requirement for all s ∈ Ω s , Double polar point Equation (26) shows the required smoothness conditions when the bottom edge of Ω is being collapsed. Next, if we also want to collapse the top edge of Ω, we can repeat the previous argument with minor changes. We would, of course, need to choose a mapF that collapses the edge Ω s × {t 2 } instead. One way of achieving this could be by choosing the control pointsF i j := (ρ j cos(θ i ),ρ j sin(θ i )) ∈ 2 , whereρ j := 1 − ρ j ,θ i := 2π − θ i . Then, we can follow the same argument as in Section 3.1.1. Asking for C 1 smoothness off pol is equivalent to asking that there exist real valuesα,β,γ such that for all s ∈ Ω s , Note once again that the smoothness at the polar point corresponding to t = 0 is imposed with respect to the parameterization F (Ω), while that at the polar point corresponding to t = t 2 is imposed with respect to the parameterizationF (Ω). The corresponding smoothness conditions in Equations (26) and (27) involve different coefficients f i j for n t ≥ 4, and so can be resolved separately. Remark 3.1. The choices of θ i , ρ j ,θ i ,ρ j are such that the maps F andF preserve the orientation of the parametric domain Ω. Rational polar B-splines at the polar points We now elaborate how the derived C 1 smoothness constraints at a polar point will enable the computation of a DTA-compatible extraction matrix. This matrix represents a linear map to a set of polar spline basis functions that are C 1 smooth on the polar parametric domain. Single polar point As before, we start by considering the case of a single collapsed edge, i.e., the one shown in Figure 3 are C 1 at the polar point. For fixed j, the set {B pol i j : i = 1, . . . , n s } is called the ( j − 1)-th polar ring of basis functions. When j > 2, all basis functions in the ( j − 1)-th ring already satisfy the C 1 continuity conditions at the polar point (their derivatives are identically zero there), so they can be included without modifications in the set of polar basis functions being created. The others will be substituted by three smooth polar basis functions. This dictates that E will be a matrix, with n := n s (n t − 2) + 3 rows and n s n t columns, taking the following sparse block-diagonal form: where I is the identity matrix of size n s (n t −2)×n s (n t −2) and E is a matrix of size 3 × 2n s . The entry ofĒ corresponding to its l-th row and (i + ( j − 1)n s )-th column is denoted with E l,(i j) . We can then rewrite Equation (28) as follows for l = 1, 2, 3, We can pull these back to Ω using Equation (24) to obtain the equivalent representation for l = 1, 2, 3, We will enforce C 1 continuity at the polar point by requiring the basis functions N pol l to satisfy a linearly independent Hermite data set at the polar point, in the spirit of Equation (25). 
To this end, we will use three source basis functions {T l : l = 1, 2, 3}, that provide us with the appropriate Hermite data. Given a non-degenerate triangle with vertices v 1 , v 2 and v 3 , let (λ 1 , λ 2 , λ 3 ) be the unique barycentric coordinates of point (u, v) with respect to such that Then, we define T l (u, v) := λ l , l = 1, 2, 3. These functions can be interpreted as triangular Bernstein polynomials of degree 1. They are non-negative on the domain triangle . Moreover, they are linearly independent, form a partition of unity, and span the space of bivariate polynomials of total degree less than or equal to 1. Then, we require that N l in Equation (30) is a spline function f such that it satisfies the continuity constraints in Equation (26), with for l = 1, 2, 3. In the interest of standardization, we choose the triangle as equilateral with vertices recall the definition of ρ 2 from Equation (19). After some calculations, we deduce that This relation says that Ē 1,(i2) ,Ē 2,(i2) ,Ē 3,(i2) are simply the barycentric coordinates of the control point F i2 := (ρ 2 cos(θ i ), ρ 2 sin(θ i )) with respect to . It is easily checked that encloses the circle centred at (0, 0) with a radius of ρ 2 , and hence Ē 1,(i2) ,Ē 2,(i2) ,Ē 3,(i2) are guaranteed to be nonnegative. In summary,Ē is specified as · · ·Ē 1,(i2) · · ·Ē 1,(n s 2) 1 3 · · · 1 3Ē 2, (12) · · ·Ē 2,(i2) · · ·Ē 2,(n s 2) (31) This matrix has full rank and the column sum is equal to one, thus confirming that E is DTA-compatible. The following result follows from the above discussion. : l = 1, . . . , n} are linearly independent, locally supported, and form a convex partition of unity on Ω pol . Remark 3.3. As long as is chosen to be a triangle enclosing the first polar ring of control points F i j for a given configuration, we are guaranteed non-negative extraction coefficients. It is only in the interest of standardization that we have chosen to fix as an equilateral triangle with a fixed pattern of vertices. Given n control points f l ∈ d , d ≥ 3, we can construct a C 1 polar surface f embedded in d , or, equivalently, after pulling back to Ω, The behavior of f at the polar point is going to be fully specified by the first three control points f 1 Double polar point When dealing with double polar surfaces, the spline construction can be obtained by collapsing a pair of two opposite edges as illustrated in Figure 3 (b). As explained in Section 3.1, the smoothness treatment of the two poles can be done separately for n t ≥ 4. In this case, each pole leads to a local extraction matrix by applying the same procedure as in Section 3.2.1 and the combined global extraction matrix takes the following sparse block-diagonal form: where I is the identity matrix of size n s (n t − 4) × n s (n t − 4) andĒ (i) , i = 1, 2, are matrices of size 3 × 2n s . By choosing the two polar parameterizations F (Ω) andF (Ω) specified in Section 3.1, it is easily verified that one can set whereĒ is the matrix defined in Equation (31) and J k is the exchange matrix of size k × k, i.e., an anti-diagonal matrix of the form The extraction matrix E can then be used to compute the set of spline functions {N l : l = 1, . . . , n} in terms of the tensorproduct functions {B i j : i = 1, . . . , n s ; j = 1, . . . , n t }. Similar to the single-pole result in Corollary 3.4, these spline functions have the following properties. 
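The claim that the relevant entries of E-bar are barycentric coordinates of the first-ring control points, and that they are non-negative whenever the triangle encloses that ring, can be checked directly. In the sketch below the triangle is a hypothetical equilateral one with circumradius 2*rho_2 (so its incircle has radius rho_2 and encloses the ring); the actual vertices fixed in the text may differ.

```python
import numpy as np

def barycentric(p, v1, v2, v3):
    """Barycentric coordinates (l1, l2, l3) of point p with respect to the
    triangle (v1, v2, v3): l1 v1 + l2 v2 + l3 v3 = p and l1 + l2 + l3 = 1."""
    T = np.column_stack([v1 - v3, v2 - v3])
    l1, l2 = np.linalg.solve(T, p - v3)
    return np.array([l1, l2, 1.0 - l1 - l2])

rho2 = 0.5
# hypothetical equilateral triangle with circumradius 2*rho2, whose incircle of
# radius rho2 encloses the first polar ring of control points
verts = [2 * rho2 * np.array([np.cos(a), np.sin(a)])
         for a in (np.pi / 2, np.pi / 2 + 2 * np.pi / 3, np.pi / 2 + 4 * np.pi / 3)]
for theta in np.linspace(0.0, 2 * np.pi, 8, endpoint=False):
    lam = barycentric(rho2 * np.array([np.cos(theta), np.sin(theta)]), *verts)
    print(bool(np.all(lam >= -1e-12)), round(float(lam.sum()), 12))  # non-negative, sum 1
```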
Refinement of piecewise-rational polar surfaces A polar spline surface can be refined in a manner similar to the one discussed in Section 2.3. We begin with the following observation. Consider the 3 × 3 matrix for some angles θ ι and θ κ ∈ {θ ι , θ ι + π} selected from the set in Equation (20). This a submatrix ofĒ, defined in Equation (31), consisting of three linearly independent columns. Its inverse is given bȳ Then, we define a sparse matrixD of size 2n s × 3, whose non-zero entriesD i j are identified as follows: for j = 1, 2, 3, From its construction it is clear that the productĒD is equal to the identity matrix. Note that the product (J 3Ē J 2n s )(J 2n sD J 3 ) is also equal to the identity matrix. Hence, keeping in mind the definition of E in Equations (34)-(35), the matrix gives rise to a product ED that is equal to the identity matrix. A similar matrix D can be found (with only two diagonal blocks) for the matrix E defined in Equation (29). Any refinement matrix of polar splines can then be built using the following procedure. First, we compute the control points of the tensor-product basis functions B i j as in Equation (33). Then, we refine the tensor-product control points using a tensor product of univariate refinement matrices (see Section 2.3). Denote this matrix with S, and denote the polar spline extraction matrices before and after refinement with E andẼ, respectively. Then, control points of the refined polar spline basis functions can be obtained by applying a matrix R to the original set of polar control points, where R is computed by solving the following (overdetermined) linear system with a unique solution, Example 3.8. The box at the top in Figure 4 (a) shows a bidegree (2, 2) unit sphere built by choosing a x = a y = a z = 1, while the box at the bottom shows a bi-degree (2, 2) ellipse with axis lengths (1, 1 2 , 1 3 ) built by choosing a x = 2a y = 3a z = 1. These descriptions use only 8 rational pieces. In each box, the figure at the top shows the exact quadric, while the figure at the bottom shows the deformed quadric obtained by perturbing the control points as per Equation (39). The exact and deformed surfaces are all C 1 smooth at the poles. 3.4.2. C 1 description of degree (2,3) For the second approach, we choose Ω s = [0, 4] and Ω t = [0, 1], and build the univariate rational spline spaces s (periodic) and t on them using the following sets of parameters: (17) and H t = I 4 , respectively. The full tensor-product extraction matrix H is obtained as in Equation (40). Moreover, the polar extraction operator E is equal to the matrix in Equation (41). The latter matrix maps a total of 16 tensor-product C −1 piecewise-NURBS B j of degree (2, 3) to a total of 6 C 1 polar splines N l . Equivalently, EH maps a total of 48 tensor-product NURBS b j of degree (2, 3) to a total of 6 C 1 polar splines N l . These relations are encapsulated in the following equation, Note that, since we are using a higher-degree representation compared to Section 3.4.1, the control points move farther away from the spline surface, mimicking the behavior of classical NURBS. Example 3.9. The box at the top in Figure 4 (b) shows a bidegree (2, 3) unit sphere built by choosing a x = a y = a z = 1, while the box at the bottom shows a bi-degree (2, 3) ellipse with axis lengths (1, 1 2 , 1 3 ) built by choosing a x = 2a y = 3a z = 1. These descriptions use 4 rational pieces. 
In each box, the figure at the top shows the exact quadric, while the figure at the bottom shows the deformed quadric obtained by perturbing the control points as per Equation (39). The exact and deformed surfaces are all C 1 smooth at the poles. (3,3) Finally, for the third approach, we choose Ω s = [0, 2] and Ω t = [0, 1], and build the univariate rational spline spaces s (periodic) and t on them using the following sets of parameters: The corresponding piecewise-NURBS extraction operators are H s = H per defined in Equation (18) and H t = I 4 , respectively. The full tensor-product extraction matrix H is obtained as in Equation (40). Moreover, the polar extraction operator E is equal to the matrix in Equation (41). The latter matrix maps a total of 16 tensor-product C −1 piecewise-NURBS B j of degree (3, 3) to a total of 6 C 1 polar splines N l . Equivalently, EH maps a total of 32 tensor-product NURBS b j of degree (3, 3) to a total of 6 C 1 polar splines N l . These relations are encapsulated in the following equation, Observe again that, since we are using a higher-degree representation compared to Sections 3.4.1 and 3.4.2, the control points move even farther away from the spline surface, mimicking the behavior of classical NURBS. Remark 3.11. The examples presented here have focused on the simplest possible C 1 descriptions of quadrics, namely descriptions that either use lowest-degree splines -bi-degree (2, 2) -or the smallest number of polynomial pieces -two. Unsurprisingly, these simplest descriptions can lead to large control triangles since each control point influences a large portion of the spline surface. Nevertheless, localized control is easily attained upon refinement (see Section 3.3) and, in particular, refinement also leads to much smaller control triangles that offer much finer geometric control. The surface shown in Figure 5 illustrates this point. This surface has been obtained by refining and modifying the control points of the bi-degree (2, 2) sphere from Figure 4 Conclusions We have presented a general class of C 1 smooth rational splines that allow for the construction and refinement of C 1 smooth curves and (polar) surfaces. They are built by gluing together multiple sets of NURBS basis functions with C 1 smoothness using a DTA-compatible extraction matrix. The main features of the splines we have built are the following: • all standard properties of NURBS, including support for intuitive control-point-based design, • (local) degree elevation and knot insertion based on classical NURBS refinement, • low-degree C 1 descriptions of exact ellipses and ellipsoids, and • compatibility with CAD or CAE software through the explicit representation in terms of NURBS. In particular, with regard to the last two bullets above, we believe that the explicit, NURBS-compatible C 1 descriptions of ellipses and ellipsoids provided herein will be of use to geometric modellers [11] and computational scientists [24] alike. For instance, the exact C 1 (re)parameterizations at polar points may make the design of algorithms more stable and efficient; it may also avoid the need for special treatment of polar points.
11,513.6
2020-12-06T00:00:00.000
[ "Mathematics" ]
Lawrence Berkeley Laboratory
A "Winner-Take-All" IC for Determining the Crystal of Interaction in PET Detectors
We present performance measurements of a "Winner-Take-All" (WTA) CMOS integrated circuit to be used with a pixel based PET detector module. Given n input voltages, it rapidly determines the input with the largest voltage, and outputs the encoded address of this input and a voltage proportional to this largest voltage. This is more desirable than a threshold approach for applications that require exactly one channel to be identified or when noise is a significant fraction of the input signal. A sixteen input prototype has been fabricated using two 1.2 µm processes (HP linear MOS capacitance and Orbit double-poly capacitance). ICs from both processes reliably identify (within 50 ns) the maximum channel if ΔV (the difference between the two highest channels) is >20 mV. The key element in the WTA circuit is an array of high gain non-linear current amplifiers. There is one amplifier for each input channel, and each amplifier is composed of only two FETs. All amplifiers are supplied by a common, limited current source, so the channel with the largest input current takes all of this supply current while the other channels receive virtually none. Thus, these amplifier outputs become a set of logical bits that identify the maximum channel, which is encoded and used to select a multiplexer input. A voltage to current converter at each input channel turns this into a voltage sensitive device. This circuit uses very little power, drawing approximately 100 µA at 5 V.
1. INTRODUCTION
We are designing a PET (positron emission tomography) detector module to identify 511 keV photons from positron annihilation with good spatial and temporal resolution [1,2]. This design consists of an 8 by 8 array of 3 mm square by 30 mm deep BGO scintillator crystals coupled on one end to a single photomultiplier tube and on the opposite end to an 8 by 8 array of 3 mm square silicon photodiodes. The photomultiplier tube provides an accurate timing pulse and initial energy discrimination for the 64 crystals in the module, while the silicon photodiode array identifies the crystal of interaction. Because of the high data rates (up to 10^6 Hz per detector module), it is imperative that a single photodiode pixel be rapidly assigned (~100 ns) as the crystal of interaction whenever the photomultiplier tube triggers. The signal to noise ratio in the photodiode is small - typically a 700 e- signal for a full 511 keV energy deposit and a 125 e- RMS noise. Compton interactions cause events with energy deposit in more than one pixel, increasing the complexity of the event topology and further reducing the signal to noise ratio. Under these conditions, a simple threshold scheme will frequently have zero or more than one pixel above threshold, yielding ambiguous events. Therefore, we have designed a "Winner-Take-All" (WTA) circuit to rapidly identify the maximum pixel. This circuit performs a function similar to WTA circuits used in neural network applications [3,4], but uses a new design that requires significantly fewer components.
CIRCUIT DESIGN
A conceptual diagram of the WTA circuit is shown in Figure 1. Each input voltage is converted to a current proportional to this voltage and sent to the WTA circuit, whose main component is an array of n identical FETs whose gates and sources are tied together.
The current proportional to the input voltage of channel i is applied to the drain of FET i, and since these FETs have common gates and sources, the FET with the highest drain current establishes a common operating point for all these transistors and determines V_GS. These transistors have high output conductance, as shown schematically in the I-V curve of Figure 2. Thus, the input current of channel i defines its drain voltage V_Di, and relatively small differences between the drain currents are transformed into relatively large differences between the drain voltages. The drain of each of these input FETs is connected to the gate of an output FET which, given an unlimited current supply, would produce an output current proportional to the square of the input voltage (minus an offset) and further magnify the differences between input levels. However, all of these output FETs are supplied by a common, limited (30 µA) supply. Therefore, the channel with the highest input voltage (the winner) will take the entire output supply current, yielding one output with the entire supply current and the remainder with no output current. These output currents are used as logical bits identifying the maximum channel. Note that once the voltage to current conversion has been performed, only two transistors are required for each input channel. The total current drawn is roughly 100 µA at 5 V and is independent of the number of input channels, as 30 µA goes to the shared current supply of the WTA and the remaining 70 µA is drive current for the output bits. The IC fabricated is shown schematically in Figure 3, and has some additional components to facilitate use. Each voltage input has a sample and hold circuit actuated by a common logic input to allow all inputs to be strobed simultaneously. Although the individual logical output bits are provided, an address encoder is also included to provide redundant data in a more compact format. Finally, an analog multiplexer combined with an output buffer provides an output voltage that is proportional to the maximum input voltage.
CIRCUIT PERFORMANCE
Sixteen input channel prototypes were fabricated using two 1.2 µm processes (HP linear MOS capacitance and Orbit double-poly capacitance). A photograph of the resulting integrated circuit is shown in Figure 4. In order to maintain compatibility with the DC output voltage of the shaper amplifier that precedes it [5], a 2.5 V input voltage corresponds to the baseline input signal; thus the dynamic range of input voltages is from 2.5 to 4.0 V. Of the 15 prototype chips produced with the HP process, 14 performed reliably. Of the chips produced with the Orbit process, the WTA portion of the circuit performed reliably on 7 of the 9 chips, but the address encoder and analog output buffer failed to perform on any of these chips. It is not known whether this failure was due to a design error or a processing error. These devices were characterized by supplying 15 of the inputs with a common voltage and the 16th input with a test voltage. The voltage difference ΔV between these two voltages is gradually reduced until the first incorrect address is detected, indicating that the wrong input is identified as the maximum. This minimum ΔV necessary for accurate performance is defined to be the threshold voltage for a single measurement. Figure 5 plots the mean threshold voltage (averaged over all channels on all chips) as a function of common voltage for the two processes, as well as the "worst case" threshold voltage (i.e.
the largest threshold voltage found on any channel). For common voltages above 2.5 V, the mean voltage difference required for accurate identification is ΔV = 19 mV (with an RMS deviation of 11.6 mV) for the HP process and the "worst case" threshold voltage is typically below 60 mV. As the WTA circuit is essentially a current input circuit, this threshold of 19 mV corresponds to a 3 µA input current difference. For common voltages below the 2.5 V design voltage, the performance is slightly degraded but the circuit still operates reasonably well. The prototypes fabricated with the 1.2 µm Orbit double-poly capacitance process show similar properties, although Figure 5 shows that the threshold voltage increases significantly when the common voltage is above 3.1 V. Figure 6 shows the distribution of threshold voltages taken over all channels of all HP process chips at all common voltages. It is reasonably well fit to a Gaussian distribution, although there are some tails caused by data taken at common voltages below 2.5 V. When the data corresponding to common voltages below 2.5 V are removed (also shown in Figure 6), the distribution is well fit to a Gaussian with 11.6 mV RMS. Finally, Figure 7 shows the distribution of mean threshold voltage as a function of channel number, showing that there were no systematic threshold voltage differences between channels. The propagation delay of the chip is measured by applying a common 2.5 V signal to 15 channels and a voltage ramp going from 1.8 V to 2.8 V in 30 ns to the 16th (test) channel. The WTA output bit corresponding to the test channel is monitored on an oscilloscope, and found to change state 37 ns after its input ramp voltage crosses the common voltage. While the propagation time is likely to depend on both the slew rate and the overdrive voltage of the test ramp, it easily meets the <100 ns requirement for our application.
OTHER APPLICATIONS
Although this device was developed for a PET detector module, there are other applications that could benefit from the rapid identification of the maximum input voltage. Common themes for other potential applications are: 1) Multi-Element Detector Arrays. The WTA circuit is effectively a multiplexer, taking in the analog inputs of several elements and providing an analog output corresponding to the highest input, along with its digital address. 2) Low Event Multiplicity. The WTA is only capable of identifying a single channel at a time, so the detector array cannot have simultaneous signals in multiple channels that all need to be read out. 3) Poor Signal to Noise Ratio. An approach utilizing threshold discriminators would be simpler than the WTA provided that the signal was always above the threshold and the noise was always below the same threshold. However, when the signal to noise is poor, or when a single interaction causes spurious signal in other channels, the threshold discriminator approach is unreliable. 4) High Event Rate. An approach utilizing a microprocessor to search over multiple digitized inputs (perhaps with a scanning ADC to reduce electronics channel count) would be very effective, but would have difficulty achieving event rates above 100 kHz. The WTA approach can achieve rates above 10 MHz. A potential application is the readout of position sensitive photomultiplier tubes, which is presently done by determining a centroid in each of two views using a 16-18 resistor chain and current division [6].
This circuit could replace the resistor chain and current division circuit for determining the centroid [7]. Anger cameras, such as those used for SPECT or in an alternate PET design [8], could conceivably employ a circuit such as this to identify the photomultiplier tube with the largest signal in order to accurately determine the position of the gamma ray interaction. Another potential application is the readout of solid state detector arrays, such as double-sided CdZnTe micro-strip detectors used in coded-aperture telescopes for x-ray astronomy [9].
CONCLUSIONS
We have manufactured a 16 channel prototype Winner-Take-All circuit that rapidly identifies (<50 ns) which input has the highest voltage, and provides both the digital address of this channel and an analog output voltage corresponding to the highest input voltage. The main portion of the circuit (the Winner-Take-All) is simple, requiring only two transistors per input channel. Power consumption is small, approximately 0.5 mW, independent of the number of input channels. Prototype circuits have been fabricated with two 1.2 µm CMOS processes and tested, and found to have reasonably good yield. The mean threshold voltage (i.e. the voltage difference required to reliably identify the input with the maximum voltage) was 19 mV with an 11.6 mV RMS deviation. The correct channel was identified in all cases when the voltage difference was >60 mV. This circuit is useful for applications that wish to economically read out detector arrays that have low event multiplicity, high event rates, and poor signal to noise ratio.
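To make the selection behaviour described above concrete at the algorithmic level, the following sketch is a crude software model of the winner-take-all stage, not a circuit simulation: the limited common supply current is divided among the output branches with a very steep dependence on the input voltages, so a channel only ~20 mV above the others receives essentially all of it. The sharpness parameter is an arbitrary assumption chosen for illustration.

```python
import numpy as np

def wta_behavioral(v_in, i_supply=30e-6, sharpness=400.0):
    """Crude behavioural model (not a circuit simulation): the limited common
    supply current is split among the output FETs with a very steep,
    softmax-like dependence on the input voltages, so the largest input takes
    essentially all of it. The sharpness value is an arbitrary assumption."""
    v = np.asarray(v_in, dtype=float)
    share = np.exp(sharpness * (v - v.max()))
    i_out = i_supply * share / share.sum()
    bits = i_out > 0.5 * i_supply              # logical output bits
    return bits, int(np.argmax(v))

v_in = np.full(16, 2.5)
v_in[9] += 0.020                               # winner 20 mV above the other channels
bits, address = wta_behavioral(v_in)
print(int(bits.sum()), address)                # exactly one bit set, at channel 9
```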
2,779
1996-06-01T00:00:00.000
[ "Engineering" ]
Explainable Trajectory Representation through Dictionary Learning
Trajectory representation learning on a network enhances our understanding of vehicular traffic patterns and benefits numerous downstream applications. Existing approaches using classic machine learning or deep learning embed trajectories as dense vectors, which lack interpretability and are inefficient to store and analyze in downstream tasks. In this paper, an explainable trajectory representation learning framework through dictionary learning is proposed. Given a collection of trajectories on a network, it extracts a compact dictionary of commonly used subpaths called "pathlets", which optimally reconstruct each trajectory by simple concatenations. The resulting representation is naturally sparse and encodes strong spatial semantics. Theoretical analysis of our proposed algorithm is conducted to provide a probabilistic bound on the estimation error of the optimal dictionary. A hierarchical dictionary learning scheme is also proposed to ensure the algorithm's scalability on large networks, leading to a multi-scale trajectory representation. Our framework is evaluated on two large-scale real-world taxi datasets. Compared to previous work, the dictionary learned by our method is more compact and has a better reconstruction rate for new trajectories. We also demonstrate the promising performance of this method in downstream tasks including the trip time prediction task and data compression. INTRODUCTION The development of information technology and the widespread use of mobile devices have produced a large amount of GPS trajectory data. Raw trajectory data typically appears as variable-length ordered sequences, which cannot be directly input into common data mining algorithms. Trajectory representation learning, which means transforming a trajectory into an embedding vector, can standardize trajectory data, extract valuable information from redundant original data, and benefit various downstream tasks including trajectory compression and trip time estimation [1]. Recently, various deep learning based models for trajectory representation learning have been developed. For example, Yang et al. [2] introduced a model based on self-attention (T3S) that automatically adjusts the importance of spatial and structure information for different similarity measures, and showed its effectiveness for trajectory similarity computation. In addition, in [3] the authors proposed a trajectory encoder-decoder network based on a graph attention mechanism to obtain trajectory embeddings and evaluated it on a vehicle trajectory prediction task. Before the emergence of these deep learning based methods, researchers also attempted to explore this field using traditional algorithms, including [4], wherein the authors introduce a pipelined algorithm that extracts frequent underlying paths called corridors from trajectories and evaluates them using a Minimum Description Length (MDL) score. Besides that, Zou et al.
[5] extracted middle level features from trajectories for clustering using a cluster specific Latent Dirichlet Allocation Model.However, the representations generated by previous methods are usually dense vector whose dimensions lack semantic meanings.As a result, it is difficult to interpret the learned representation.In this paper, we introduce an explainable trajectory representation method through dictionary learning for trajectories on a network.The network is usually a road map for vehicle trajectories or a grid network for unstructured trajectories, on which trajectory can be projected using map matching [6].The basic idea is demonstrated in Figure 1.Given a collection of trajectories on a network, it extracts a compact dictionary of commonly used subpaths called "pathlets".Each trajectory can then be reconstructed by concatenating pathlets from the dictionary, similar to the process of constructing a sentence by assembling a group of words.The resulting trajectory representation is a sparse binary vector, where each dimension corresponds to a pathlet in the learned dictionary and each binary variable indicates whether the corresponding pathlet is used to reconstruct the trajectory.Such design is motivated by the observation that people's travel behavior exhibits remarkable regularity, enabling us to reconstruct majority of trajectories using a small set of movement patterns. The pathlet representation of trajectories was first explored by Chen et al. [7], who formulate the pathlet learning problem as a combinatorial optimization problem.Solved approximately using dynamic programming, the original formulation is costly to compute and lacks theoretical guarantee.We propose an algorithm using a novel dictioanry learning formulation that provides better optimality and scalability for large trajectory datasets.Specifically, in our formulation, the objective function minimizes the size of the pathlet dictionary and the average number of pathlets required to reconstruct each trajectory at the same time.We propose an efficient solution to this integer programming problem, by first solving its relaxed version and find the integer solution using randomized rounding.To ensure the scalability to large-scale road networks, we further propose a hierarchical representation scheme that compute pathlets of different granularity in multi-scale spatial partition of the map.This algorithm is evaluated on two real-world taxi datasets and some frequent mobility patterns are visualized.We also demonstrate the promising performance of this method in downstream tasks.For example, our method outperforms neural-network based methods by 4.7% in prediction accuracy on trip time prediction. PRELIMINARY Terminology.Given a dataset and a roadmap that can be formed as a directed graph = (, ), a trajectory ∈ is defined as a sequence of edges on .For each , a path on is a candidate pathlet if is a subpath of .We denote the set of all candidate pathlets traversed by T as . 
Given a pathlet dictionary and a trajectory , is a subset of so that can be represented by concatenating ∈ .This process is denoted by = ( ).Furthermore, the representation cost (, ) refers to the minimal number of elements required to represent , which is defined as: (, ) = min Problem definition.The goal is to find an optimal dictionary that minimizes the following two objectives at the same time: 1) the size of the dictionary, as a smaller dictionary contains less redundant information and is therefore more desirable.2) the average number of elements required to reconstruct trajectories.We use hyperparameter to control the trade-off between these two objectives.Therefore, similar to [7], in this paper the pathlet dictionary learning problem is defined as: METHODOLOGY 3.1 Problem Formulation To formulate the problem defined above using vector notations, we introduce three matrices ) refers to the maximum value of th row of , which is equal to 1 if any trajectory utilizes to represent itself.In other words, ( ,: ) = 1 means that candidate pathlet is selected as an element of the dictionary.We reuse the notation to represent the matrix form of the dictionary, which is a submatrix formed by selected columns of , = [:, { | ( , :) = 1}] and therefore () = |𝑃 | =1 ( ,: ).The constraint = corresponds to the setting that each trajectory should be reconstructed using pathlets.In this optimization problem, the dictionary and the assignment relationship will be optimized at the same time.It is worth noting that the pathlet learning problem described above is NP-hard in most cases.Therefore, an effective algorithm is required to obtain good approximated solutions. Pathlet Dictionary Learning with Randomized Rounding The proposed algorithm consists of two main steps.Firstly, we relax the binary constraint, which transforms the original optimization problem into a convex optimization problem.Therefore, the global optimal solution * can be found easily by the projected gradient descent algorithm.Then a randomized rounding step is carried out to obtain the final solution .The whole procedure is shown in the following pseudocode of Algorithm 1. Algorithm clip the result to make sure 0 Probabilistic Bound.Given constant matrices (, ) and hyperparameters (, ) , We claim that the final solution satisfies In practice, | | can be quite large.We pre-filter out less frequently used candidates to alleviate computational burden.Please refer to Appendix B for details. This inequality means that the probablity that a solution with low cost can be found and all trajectories will be covered at the same time is lower bounded by a positive constant.Therefore, we can repeat the randomized rounding process to get a series of { 1 , 2 ...} until find a satisfactory solution.The proof can be found in appendix part A. Hierarchical Pathlet Learning Candidate pathlet space consists of all segments of trajectories from dataset, whose size is usually huge in real-world dataset and make it time-consuming to get the solution.On the other hand, Multi-scale dictionaries of pathlets and trajectory representations can help people gain a deeper understanding of traffic characteristic.To enhance the scalability of the original algorithm, we introduce a hierarchical method called "pathlet of pathlets" to reduce the computation complexity and generate multi-scale trajectory representations. 
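The rounding step of the relax-and-round strategy described above can be sketched as follows. The snippet assumes that a fractional solution x* of the relaxed problem is already available (e.g. from projected gradient descent) and only illustrates the resample-until-feasible loop suggested by the probabilistic bound; the scaling factor mu and the acceptance threshold are placeholders standing in for the logarithmic constants of the analysis, not the paper's exact values.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_rounding(x_star, A, cost, mu=6.0, max_tries=50):
    """Round a fractional solution x* in [0,1]^n to a binary one.
    Each entry is set to 1 with probability min(1, mu * x*_i); we resample
    until the coverage constraints A z >= 1 hold and the cost stays within
    a constant factor of the relaxed optimum. The constants are placeholders
    for the ln(2|T|)-type quantities of the analysis."""
    relaxed_cost = cost @ x_star
    for _ in range(max_tries):
        z = (rng.random(x_star.size) < np.minimum(1.0, mu * x_star)).astype(int)
        if np.all(A @ z >= 1) and cost @ z <= 2 * (mu + 1) * relaxed_cost:
            return z
    return None

# toy instance: 4 candidate pathlets, 3 coverage constraints (rows of A)
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
cost = np.ones(4)
x_star = np.full(4, 0.5)          # stand-in for the relaxed optimum
print(randomized_rounding(x_star, A, cost))
```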
Specificly, we first partition the roadmap into different levels of granularity using axis-aligned binary space partitioning.Starting from the bottom of the partition tree, we compute the -th level pathlet dictionary as the union of dictionaries computed in all -th level cells.Next, we use the -th level pathlet representation of each trajectory as the input, and compute the ( − 1)-th level pathlet dictionaries.This iterative process can be repeated to generate multi-scale pathlets that capture movement patterns. Representing New Trajectories Once we obtain a set of dictionaries at multiple scales, we can use them together in representing new trajectories.We define a unified dictionary matrix ′ by concatenating the dictionary matrices by column.The size of dictionary ′ is therefore equal to the number of columns of ′ .For any new trajectory, it can be mapped to a new representation space using ′ .To be specific, representation vector is obtained by solving: min Here represents the vector recording the edges covered by a new trajectory, and denotes the representation vector that we aim to solve for.This problem can be viewed as a simplified version of the original problem because the dictionary is fixed at this moment.We solve it using the same strategy described before: first compute the optimal fractional solution * using gradient descent and then round it to get the final binary solution.in Shenzhen [8] and Porto [9].Our research largely follows the problem formulation described in [7] but we adopt different formulation and method.In that paper, the authors first relaxed ( ,: ) to , , and then solved it using dynamic programming, which is simple and effective.However, this relaxation operation resulted in an redundant dictionary, providing us with room for improvement especially when is small. EXPERIMENTS 4.1 Numerical Performance In this experiment, hyperparameter and are set as 0.1 and 1 4 (2| |) respectively, and we only randomly sample 3 times using strategy described before.As is shown in Table 1, our approach generates a more compact and effective dictionary compared to dynamic programming methods, reducing the dictionary size by 43.01% and 36.36%respectively on two datasets and the representation cost is relatively lower.At the same time, it is observed that the cover ratio is very close to 1, here is set as 1 4 (2| |) instead of (4| |) because in the experiment we found that the method can still produce a feasible solution with low cost within 3 random sampling cycles, which further validates the effectiveness of previously derived probability bound. 
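For reconstructing a trajectory once a dictionary is fixed, a simple greedy stand-in for the optimization above is sketched below: at each position it appends the longest dictionary pathlet matching the remaining edge sequence. This is only an illustration of the concatenation idea (unlike the relax-and-round solver, it is not guaranteed to find the minimum-cost cover), and it assumes single-edge pathlets are available as a fallback.

```python
def greedy_reconstruct(trajectory, dictionary):
    """Greedily cover the edge sequence of a trajectory by concatenating
    pathlets from the dictionary, preferring the longest match at each step.
    Returns the list of chosen pathlets, or None if no pathlet matches."""
    pathlets = sorted(dictionary, key=len, reverse=True)   # longest first
    chosen, i = [], 0
    while i < len(trajectory):
        match = next((p for p in pathlets
                      if trajectory[i:i + len(p)] == p), None)
        if match is None:
            return None                  # trajectory cannot be covered
        chosen.append(match)
        i += len(match)
    return chosen

dictionary = [('a',), ('b',), ('c',), ('d',), ('a', 'b'), ('b', 'c', 'd')]
print(greedy_reconstruct(('a', 'b', 'c', 'd'), dictionary))
# [('a', 'b'), ('c',), ('d',)] -- greedy, not necessarily the minimum-cost cover
```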
4.1.2Reconstruction using Multi-scale Dictionary.The hierarchical framework enables us to learn multi-level pathlet dictionaries on arbitrarily large maps and datasets with limited computational resources.In this section, we validate the above statement by comparing the performance of the dictionary directly learned on the whole map (denoted using ) and the dictionaries generated by hierarchical framework on test data.Specifically, we randomly selected 10,000 trajectories from the Futian district as train set to learn the dictionary and tested it on another 10,000 trajectories.In Table 2, 2 represents dictionaries learned on regions of the 2-th layer and 1 + 2 refers to multi-scale dictionaries.The performance of can be considered as ground truth to some extent, although it comes with significant computational resource consumption.We can observe that compared to only using 2 , the reconstruction cost is much lower when using the multi-scale dictionary.The performance of the multi-scale dictionary is closer to that of , but consumes only 54% of the GPU memory resources compared to the training of and the computation time is reduced by 20%. Visualization of Pathlet Dictionary Some frequent pathlets are visualized in Figure 3 to intuitively verify whether the algorithm finds common mobility patterns or not.For example, Figure 3 (c) is a pathlet corresponding to turning left on the overpass.Figure 3 (e) depicts Praça Mouzinho de Albuquerque, which is one of the famous attractions in Porto.These pathlets have semantic meaning consistent with our cognition in life and reveal common mobility patterns shared by numerous trajectories. Application in Trip Time Prediction. To demonstrate the effectiveness and usability of the representation vector, we utilize a simple GBDT model to predict trajectory time whose input is the combination of trajectory embedding vector and the time encoding vector.We use mean absolute error (MAE) between the predicted result and the ground truth (in seconds) as the metric to train the simple GBDT model.The performance of all evaluated models are summarized in the table 3. It can be observed that our proposed algorithm ensures explainability of the results without compromising accuracy.One possible reason why our method outperforms others is that our vectors are naturally sparse, which makes it more robust on the test set and easier to train the model.This demonstrates the simplicity and effectiveness of our method, as well as its broad prospects in the field of application.Learning a dictionary and reconstructing trajectories using elements from this dictionary can also be considered as a process of data compression.In [4] the authors described an evaluation method based on Minimum Description Length (MDL) to measure the compression performance: Here L (.) refers to the size of a data collection in bits.D and C are used to denote the dataset and the corridor set, a concept similar to pathlets.D | C refers to the representation of the original trajectory using corridor.Compared to the previous method's score of 0.27 reported in [4], our method achieved a score of 0.21.One possible reason is our objective function and MDL score are consistent, whereas method in [4] based on LDA does not optimize the MDL score explicitly.This experiment indicates that transforming trajectories into pathlets form can effectively compress data, facilitating easier storage and transmission. 
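The MDL-style compression score can be made concrete with a simple bit model, sketched below: every edge identifier is charged log2(|E|) bits and every pathlet identifier log2(|C|) bits. This accounting is our own simplification for illustration and is not necessarily the description-length model used in [4].

```python
import math

def mdl_score(dictionary, encoded_trajectories, raw_trajectories, n_edges):
    """Compression ratio (L(C) + L(D|C)) / L(D): dictionary bits plus the bits
    needed to express every trajectory as pathlet ids, divided by the bits of
    the raw edge sequences. Lower values mean better compression."""
    bits_edge = math.log2(n_edges)
    bits_pathlet = math.log2(len(dictionary))
    L_C = sum(len(p) for p in dictionary) * bits_edge            # store the pathlets
    L_D_given_C = sum(len(r) for r in encoded_trajectories) * bits_pathlet
    L_D = sum(len(t) for t in raw_trajectories) * bits_edge      # raw edge sequences
    return (L_C + L_D_given_C) / L_D

# toy numbers: 4 pathlets totalling 10 edges; 100 trajectories of 30 edges,
# each reconstructed with 6 pathlet ids, on a road network with 5000 edges
dictionary = [tuple(range(k)) for k in (2, 2, 3, 3)]
encoded = [[0, 1, 2, 3, 0, 1]] * 100
raw = [list(range(30))] * 100
print(round(mdl_score(dictionary, encoded, raw, n_edges=5000), 3))
```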
CONCLUSION AND FUTURE WORK
In this study, we reformulated the problem of learning pathlets from a collection of trajectories and solved it with a novel dictionary-learning-based method, resulting in a hierarchical and explainable representation of trajectories with a theoretical probability bound. We tested our algorithm on two large-scale datasets. The output dictionary of pathlets provides deeper insight into mobility patterns. We also demonstrated how pathlets can benefit downstream tasks such as trip time estimation and trajectory compression. In future work, we will adapt our algorithm to represent trajectories in other domains, improve the numerical optimization, and further advance the theoretical analysis.

A PROOF OF THE PROBABILISTIC BOUND
Each entry of the rounded solution is an integer greater than or equal to 0. If an entry equals 0, the corresponding constraint is satisfied automatically. For each element equal to 1, the probability that the corresponding constraint is not satisfied (that is, the probability that the edge is not covered for a trajectory containing it) is given by (5). For a value ≥ 0 we have 1 − min(·, 1) ≤ exp(−·); combining this with Lemma 1 yields E[ ( )] < +1 ( * ). Step 3 of the proof: based on the Markov inequality, and calling it a bad event if a constraint is not satisfied or ( ) > 2 +1 ( * ), the union bound gives a probability of at most 1/2 + | |exp(− ) that one of these two bad events happens. Thus, if the sampling parameter is at least (2| |), then with positive probability no bad event happens and the cost of the final solution is at most 2 +1 ( * ).

B EXPERIMENTS SETUP
B.1 Dataset
The following describes the trajectory datasets used in our study; key statistics are summarized in Table 4.
Shenzhen. Zhang et al. [8] released this dataset containing approximately 510k dense trajectories generated by 14k taxi cabs in Shenzhen, China; it can be downloaded at [13].
Porto. This dataset describes trajectories performed by 442 taxis running in the city of Porto, Portugal [9]. Each taxi reports its location every 15 s. This dataset was used for the Trajectory Prediction Challenge @ ECML/PKDD 2015.
Figure 4 displays the spatial distribution of the two datasets; it can be observed that there is a significant spatial imbalance in the distribution of trajectories. In our experiments, we focus on densely populated areas. To implement our hierarchical algorithm, we first need to partition the map into smaller regions. Specifically, for the Porto dataset, we select a 15.3 km x 13.5 km area in the city center and divide it into six regions. Similarly, for the Shenzhen dataset, we choose the city center area encompassing the Nanshan, Futian, Bao'an, and Luohu districts and divide it into 32 grids.
For these two datasets, we remove trajectories with fewer than 20 GPS sample points and use the method proposed in [14] to convert each trajectory into a series of edges on the roadmap. Then the matrices are generated as described in Section 4.
B.2 Evaluation Protocols and Platform
We randomly sample 30% of the trajectories as the test set and use the remaining 70% as the training set. We evaluate the quality of a resulting pathlet dictionary from the following aspects:
• Size of the pathlet dictionary, i.e., the number of pathlets it contains. This characterizes the compactness of a pathlet dictionary.
• Representation cost, i.e., the average number of pathlets used to reconstruct a trajectory, which measures how efficiently the pathlet dictionary explains trajectories.
• Coverage ratio, which measures whether the dictionary covers the possible trajectories as comprehensively as possible.
Our method is implemented in Python and trained on an Nvidia A40 GPU. All experiments are run on Ubuntu 20.04 with an Intel Xeon Gold 6330 CPU.

B.3 Pre-filtering Method
In real-world scenarios, the number of candidate pathlets | | is quite large, which means the matrices are huge. This poses a significant challenge for both computation and storage. At the same time, the pathlets we aim to identify are mobility patterns shared by multiple trajectories, so we can proactively filter out infrequent candidate pathlets without significantly affecting the results. Specifically, for each candidate pathlet we traverse the trajectory dataset and count the number of trajectories that pass through it. A threshold is then set, and only those candidate pathlets whose count exceeds the threshold are retained, yielding a filtered candidate set ′. In our implementation, the threshold was set to 3. To evaluate the effect of pre-filtering, we randomly selected 10,000 trajectories from the Futian district; the results are shown in Table 5. We observe that the filtering operation significantly reduces GPU memory usage and computation time, while not significantly affecting the loss.

B.4 Implementation Details for the Trip Time Prediction Task
Travel time prediction is a regression task aimed at forecasting the duration of a trip; the result is often strongly correlated with both the chosen route and the departure time. Our approach for encoding the departure time is inspired by the positional encoding mechanism proposed in [15]. Specifically, we encode this information using sine and cosine functions and concatenate it with the trajectory representation vector as the input to the GBDT model. Given the departure time, the time encoding vector is computed from its hour and minute components. As described in Section 4, we train a simple GBDT model on this input, using the mean absolute error (MAE) between the predicted result and the ground truth (in seconds) as the metric. The workflow is illustrated in Figure 5. The key GBDT parameters are set as follows: maximum tree depth 5; number of estimators 100. To maintain fairness in the comparison, we also generated a short-trip version of the original Porto dataset following the sampling method described in [12]. We tested the algorithm on both datasets; the metrics include MAE, MAPE, RMSE, and RMSLE.
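The following sketch assembles the trip-time prediction input as described in B.4: a sinusoidal encoding of the departure hour and minute concatenated with the pathlet representation vector, fed to a GBDT regressor. The encoding formula and the use of scikit-learn's GradientBoostingRegressor are my assumptions; the paper does not specify its exact encoding constants or library.

```python
# Hedged sketch of the GBDT trip-time pipeline (names are illustrative).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

def encode_departure_time(hour, minute):
    # map hour-of-day and minute-of-hour onto the unit circle
    h = 2 * np.pi * hour / 24.0
    m = 2 * np.pi * minute / 60.0
    return np.array([np.sin(h), np.cos(h), np.sin(m), np.cos(m)])

def train_trip_time_model(pathlet_vectors, departure_times, durations_sec):
    # pathlet_vectors: (n, |D'|) binary representation vectors
    # departure_times: list of (hour, minute); durations_sec: targets in seconds
    time_feats = np.vstack([encode_departure_time(h, m) for h, m in departure_times])
    X = np.hstack([pathlet_vectors, time_feats])
    model = GradientBoostingRegressor(max_depth=5, n_estimators=100)
    model.fit(X, durations_sec)
    mae = mean_absolute_error(durations_sec, model.predict(X))  # MAE used as the evaluation metric
    return model, mae
```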
C SUPPLEMENTARY EXPERIMENTAL RESULTS
C.1 Effect of 𝜆
We ran the algorithm under different values of 𝜆. It can be observed from Figure 6 that as 𝜆 increases, the average number of pathlets needed to reconstruct a trajectory decreases. At the same time, the size of the dictionary increases, which means that the algorithm prefers a more compact dictionary when 𝜆 is smaller.

C.2 Visualization of Hierarchical Pathlets
After obtaining the local pathlet dictionaries, we can generate higher-level pathlets from the previous result using the method described in Section 3.3. Figure 7 shows pathlets of different levels; the three rows correspond to pathlets at different levels. Since higher-level pathlets are generated by concatenating lower-level pathlets, long-distance movement patterns can be mined from the higher-level pathlets.

C.3 Partial Reconstruction
The pathlet dictionary used in the previous evaluation is a complete dictionary that can reconstruct every trajectory in the previous results. However, if we accept that a small portion of trajectories are not rebuilt, the size of the pathlet dictionary can be reduced significantly. As shown in Figure 8, the uncover ratio (the proportion of edges that cannot be covered using pathlets from the dictionary) decreases rapidly and drops below 5% when only the 50% most frequent pathlets are preserved, which means the majority of trajectories can still be reconstructed using half of the pathlet dictionary. On the other hand, each trajectory needs more pathlets to reconstruct itself when only part of the pathlets is preserved compared to using the complete dictionary, which reveals a trade-off between redundancy and efficiency.

In [7] the authors noted the difficulty of solving the problem on large-scale datasets. To address this challenge, they proposed modifying the objective function so that the problem can be solved independently for each trajectory, which significantly reduces the complexity of the solving process. Specifically, they transformed the original problem into a set of per-trajectory subproblems. The primary distinction is the substitution of the dictionary-level variable with per-trajectory variables. However, for a specific pathlet it is quite common that some trajectories passing through it do not use it to represent themselves. Consequently, there is a considerable disparity between the solution obtained by this approach and the optimal solution, especially when 𝜆 is small, since the size of the dictionary is a crucial factor in gauging its overall quality.

Figure 1: Illustration of pathlet learning: a pathlet dictionary is learned from the dataset, and each trajectory can be represented by concatenating pathlets chosen from this dictionary.
Figure 2: Illustration of the hierarchical pathlet representation; each cell refers to the -th region of the -th layer.
4.1.1 Performance Comparison with Previous Work. The proposed method is evaluated on two datasets collected separately; due to the space limit, details of the experimental setup can be found in Appendix B.
Figure 4: Visualization of the real-world datasets and trajectories in the L1 grid.
Figure 5: Trip time estimation using the representation vector.
Figure 8: The representation cost and uncover ratio when varying the size of the dictionary.
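As a concrete illustration of the partial-reconstruction experiment in C.3, the following hedged sketch keeps only a fraction of the most frequently used pathlets and computes the resulting uncover ratio. The matrix layout and variable names are assumptions; usage counts would come from the learned representations.

```python
# Illustrative computation of the "uncover ratio" after pruning the dictionary.
import numpy as np

def uncover_ratio(A, B, usage_counts, keep_fraction=0.5):
    """A: (|T|, |E|) trajectory-edge incidence matrix (0/1).
    B: (|P|, |E|) pathlet-edge incidence matrix (0/1).
    usage_counts: (|P|,) number of trajectories that use each pathlet."""
    k = int(keep_fraction * len(usage_counts))
    keep = np.argsort(-usage_counts)[:k]          # indices of the most frequent pathlets
    covered_edges = (B[keep].sum(axis=0) > 0)     # edges still coverable by the pruned dictionary
    traj_edges = A.astype(bool)
    uncovered = np.logical_and(traj_edges, ~covered_edges).sum()
    return uncovered / max(traj_edges.sum(), 1)
```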
We define three matrices to record the cover relationships among trajectories, edges, and candidate pathlets. The first matrix has dimensions || by | |, where each element is equal to 1 when the i-th trajectory passes through the j-th edge and 0 otherwise. The second matrix, of size || × | |, is constructed in the same way for the relationship between all candidate pathlets and edges. Similarly, the third matrix is a | | × | | decision matrix, in which an entry is 1 if the corresponding pathlet is used to represent the corresponding trajectory and 0 otherwise.
Table 1: Performance comparison with previous work.
Table 2: Performance using different dictionaries. GPU memory here refers to the GPU memory needed for training, not the storage of the dictionary.
Table 3: Performance comparison with previous work.
Table 5: The effect of adopting pre-filtering.
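A small sketch of how the trajectory-edge and pathlet-edge incidence matrices described above could be built with sparse storage. The input format (lists of edge ids) and function names are assumptions for illustration.

```python
# Build 0/1 incidence matrices for trajectories/pathlets over roadmap edges.
import numpy as np
from scipy.sparse import lil_matrix

def build_incidence(sequences, n_edges):
    """sequences: list of edge-id lists (trajectories or candidate pathlets).
    Returns a sparse 0/1 matrix of shape (len(sequences), n_edges)."""
    M = lil_matrix((len(sequences), n_edges), dtype=np.int8)
    for i, seq in enumerate(sequences):
        for e in seq:
            M[i, e] = 1
    return M.tocsr()

# usage (toy example):
# A = build_incidence(trajectories_as_edge_ids, n_edges)        # |T| x |E|
# B = build_incidence(candidate_pathlets_as_edge_ids, n_edges)  # |P| x |E|
```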
5,542.6
2023-11-13T00:00:00.000
[ "Computer Science", "Engineering" ]
Green finance growth prediction model based on time-series conditional generative adversarial networks Climate change mitigation necessitates increased investment in green sectors. This study proposes a methodology to predict green finance growth across various countries, aiming to encourage such investments. Our approach leverages time-series Conditional Generative Adversarial Networks (CT-GANs) for data augmentation and Nonlinear Autoregressive Neural Networks (NARNNs) for prediction. The green finance growth predicting model was applied to datasets collected from forty countries across five continents. The Augmented Dickey-Fuller (ADF) test confirmed the non-stationary nature of the data, supporting the use of Nonlinear Autoregressive Neural Networks (NARNNs). CT-GANs were then employed to augment the data for improved prediction accuracy. Results demonstrate the effectiveness of the proposed model. NARNNs trained with CT-GAN augmented data achieved superior performance across all regions, with R-squared (R2) values of 98.8%, 96.6%, and 99% for Europe, Asia, and other countries respectively. While the RMSE for Europe, Asia, and other countries are 1.26e+2, 2.16e+2, and 1.16e+2 respectively. Compared to a baseline NARNN model without augmentation, CT-GAN augmentation significantly improved both R2 and RMSE. The R2 values for the Europe, Asia, and other countries models are 96%, 73%, and 97.2%, respectively. The RMSE values for the Europe, Asia, and various countries models are 2.24e+2, 7e+2, and 2.07e+2, respectively. The Nonlinear Autoregressive Exogenous Neural Network (NARX-NN) exhibited significantly lower performance across Europe, Asia, and other countries with R2 values of 74%, 52%, and 86%, and RMSE values of 1.11e+2, 3.63e+2, and 1.8e+2, respectively. Introduction Global warming and climate change are considered as the biggest economic failures and challenging situations.Earth's atmosphere is witnessing a huge concentration of carbon dioxide, almost more than 420 parts per million (ppm) as per NASA's data [1].Accordingly, tackling the challenges of global warming and reducing air pollution falls not only on these nations but is a collective responsibility shared by all of humanity [2,3]. Since the 1960s, national and international policymakers, economists, and environmental activists have been more conscious of the damaging effects of environmental degradation on climate change.Subsequently, to promote economic development, numerous nations have put forth laws and policies to combat environmental deterioration.To guarantee a clean, safe, healthy, and productive environment, for example, Malaysia implemented the Environmental Quality Act in 1974 [4].Increasing economic growth is linked to increasing levels of environmental pollution to increase growth engines that depend on consumer and manufacturing activities to meet societal requirements, which in turn causes wasteful pollution and strains environmental resources [4]. There are some commitments made by many countries, including China, against climate change, and some of these pledges include developing the renewable energy industry and modernizing the energy system.Policymakers and authorities have made extensive efforts to make this a reality [5]. 
Accordingly, the need for green financing has developed to achieve long-term growth and sustainable development, as green financing is defined as financial investments aimed at sustainable development projects that protect the environment.Green finance has many types such as climate finance, industrial pollution control, water sanitation, and biodiversity protection.The main goal of green finance is to protect the environment by reducing or avoiding emissions of greenhouse gases (GHGs). For all the above, green finance is one of the most important areas of research.This concept has been widely addressed in Western countries that have had the greatest impact on the environment, such as China [6]. While artificial intelligence (AI) facilitates greater efficiency in marketing, creativity has been emphasized as the future of business.Existing theories and frameworks in the literature have failed to adequately explore the impact of AI on investment innovation [7]. This study aims to introduce a model for forecasting the expansion of green finance using time-series conditional generative adversarial networks.The proposed model employs artificial intelligence algorithms on publicly available data to construct a green finance recommendation system capable of forecasting the overall volume of green investments globally.Below are the principal contributions of this research: • Forecasting the growth of green finance worldwide were proposed • Incorporating CT-GAN to address data scarcity issues. • Utilizing a straightforward neural network (NAR-NN) suitable for the dataset's characteristics. Related work According to the studied literature, there are few research investigations that measure the impact of pollution on investments and capital flow [8].In these studies, researchers have usually relied on mathematical tools such as stochastic calculus [9], random processes, ARIMA time series regression [10], and GARCH volatility models [11] to detect the various time-series patterns.However, the value of financial assets is influenced by an array of factors spanning both financial and non-financial domains.Accordingly, this complexity renders traditional models inadequate. Authors in [12], discussed the importance of analyzing and forecasting carbon emissions, energy consumption, and the outputs for transitioning to a clean energy economy, especially in rapidly growing markets like China.The paper utilized a nonlinear grey Bernoulli model (NGBM) to predict these indicators and proposed a method to optimize its parameters.The results indicated that the forecasting ability of NGBM with optimized parameters (NGBM-OP) outperforms traditional models like GM and ARIMA, with Mean Absolute Percentage Errors (MAPEs) ranging from 1.10 to 6.26 for out-of-sample data (2004)(2005)(2006)(2007)(2008)(2009). 
The predictions also suggested that between 2011 and 2020, China's compound annual emissions are expected to grow by 4.47%, while energy consumption was forecasted to decrease slightly (-0.06%), and real GDP is expected to increase by 6.67%.Moreover, authors in [5] highlighted the strategic importance of developing renewable energy.Through a timeseries analysis, this research revealed that financial development contributed significantly, explaining 42.42% of the variation in renewable energy growth.Capital market development emerges as the most crucial factor, followed by foreign investment.A comparison with the EU and the US cases suggested that the EU's approach is more relevant and warrants careful study by Chinese policymakers. Furthermore, in [13], authors were developing the renewable energy sector and upgrading China's energy structure play pivotal roles in addressing climate change commitments.Financial issues emerge as a critical constraint, directly tied to the country's financial development. The study proved that financial development contributes significantly, with capital market development being the most crucial factor, followed by foreign investment, advocating for a closer examination of the EU's approach by Chinese policymakers. Three significant contributions have been presented in [14].First, the authors started by talking about the evolution of the financial well-being domain.Second, they put forth a theoretical framework that delineates the antecedents-based interventions that can be implemented in a particular socioeconomic context to achieve economic well-being.Third, a list of methodological and topical propositions was provided for future researchers and academics to review.They also developed ten future research agendas (FRAs) concerning financial well-being, addressing the need to examine diverse nations with diverse market structures. On the other side, Machine learning techniques enable investors to enhance financial assets prediction and forecast market strength more accurately than conventional methods.The advent of advanced computer technology such as deep neural networks, and Long Short-Term Memory (LSTM) networks, have prompted a shift toward capturing complex information impacting financial assets [15].LSTM networks excel at retaining long-term information, unlike traditional models.In addition, Convolutional Neural Networks (CNNs), were adopted to extract features and recognize local dependencies [16]. 
Combining CNN and LSTM, a model known as ConvLSTM2D was proposed in [17], this research proposed a regression and neural network technique to model stock prices alongside environmental factors, aiming to offer a more precise time series model for stock prices.The model incorporated the ConvLSTM2D network, which extracted all necessary information from air pollution data from major industrialized Chinese cities including Beijing, Taiyuan, Changchun, and Shijiazhuang.Furthermore, Bidirectional LSTM was used in [18] to in investigate how air pollutants indirectly influence investor sentiment and endeavors to establish a more comprehensive and effective stock price prediction framework.The study focused on the SSE Shanghai Enterprises (SSESHE) index and introduced six distinct air pollutants as crucial input parameters.The predictive model developed both, Bidirectional and Long Short-Term Memory (BiLSTM) to project stock closing prices.Additionally, the study compared the proposed model against Support Vector Regression (SVR), Long Short-Term Memory (LSTM), and Gate Recurrent Unit (GRU) models.The experiments concluded that the BiLSTM model that integrated air pollutant data in stock forecasting, achieved the highest prediction accuracy of 94.1%. According to the conducted literature review conducted, the impact of pollution on investments in green finance specifically has never been addressed, despite its importance in measuring the evolution of green finance over the years.Consequently, this research focuses on studying neural time series techniques that can evaluate the success of green finance across various time periods.To aid in analysis and forecasting issues within the tested dataset of investments in green finance across continents over the years, the nonlinear autoregressive neural network (NAR-NN) and NAR-NN have been explored. Economic growth of the studied countries In this paper, we analyze green finance from 40 different countries across 5 continents.Table 1 summarizes the financial status of these countries.The Gross Domestic Product (GDP) represents the total monetary value of all goods and services produced and sold within a country for one year.The global GDP is estimated to be $100,562,000,000,000.Among the countries studied, Tunisia stood out as the sole representative of Africa.Classified as an upper-middleincome country, Tunisia's Gross Domestic Product (GDP) grew at an annual rate of 3.5% in the pre-revolution period, from 2008 to 2010 [19].A research program outlined in [20] proposes a multilevel and multidisciplinary approach to financial system policy, aiming for environmental, social, and economic sustainability.The program leverages social sciences to teach students how financial tools can address economic, social, and environmental challenges.A key focus is on achieving the European Union's "Europe 2030" goals, which require an estimated annual investment of EUR 180 billion for the next 20 years, particularly in Central and Eastern Europe, to improve energy efficiency and reduce transport emissions.According to [21], the green bond market, a specific segment focused on climate-friendly projects, was launched in 2007-2008 with the help of the first offerings from Multilateral Development Banks.This market has seen a surge in participation from sub-national agencies, local development funds, and institutions like the World Bank, International Monetary Fund, and the European Investment Bank, particularly between 2007 and 2012. 
In Europe, Turkey stands out as an upper-middle-income country with a mixed-market emerging economy, reflecting its ongoing economic development and growth [22].Shifting to North America, Costa Rica, a Central American nation, is another upper-middle-income country that has witnessed steady economic expansion over the past 25 years [23].Canada, also in North America, boasts the world's ninth-largest economy and maintains strong trade partnerships with the United States, China, and the United Kingdom [24].Finally, in Asia, Japan reigns supreme as the world's third-largest economy.Moreover, Japan's position as the world's leading creditor nation grants it significant global influence with far-reaching economic implications [25]. GAN for data augmentation Machine learning algorithms often struggle with imbalanced datasets, where one class has significantly more samples than others.To address this challenge, we can leverage two techniques: Generative Adversarial Networks (GANs) and Synthetic Minority Over-sampling Technique (SMOTE).While SMOTE is a useful tool, it can create new samples too similar to the majority class, leading to overfitting and poor model performance.In contrast, GANs excel at learning the distribution of the minority class, generating more representative samples.Additionally, GANs offer a robust way to enrich existing data.These networks consist of two key components: a generator and a discriminator.The generator synthesizes new data points, while the discriminator attempts to distinguish real data from the generated samples.Through this adversarial process, the generator learns to create increasingly realistic synthetic data that fools the discriminator [26].Generative Adversarial Networks (GANs) offer an alternative to conventional augmentation techniques by generating synthetic samples resembling the minority class.GANs excel in learning the distribution of minority classes, resulting in the creation of diverse and realistic synthetic samples, surpassing the interpolation of existing data.Unlike traditional augmentation methods, which may lead to overfitting due to the replication of existing samples, GANs produce samples that deviate from the majority class.This enhances the model's ability to generalize effectively and accommodate new data instances [27].GAN training uses iterative optimization.The generator and discriminator are alternately updated using gradient descent to minimize loss functions.This makes the generator and discriminator compete throughout training.The game theory-inspired minimax loss function is the most frequent GAN loss function.Eq (1) calculates mini-max loss for a GAN with generator G and discriminator D [28]. Wherex represents real data samples drawn from the true data distribution pdata(x), z represents random noise (latent vector) drawn from a prior distribution pz(z)(often a uniform or normal distribution),G(z) is the output of the generator given the noise z generating synthetic samples, and D(x) is the discriminator's output, representing the probability thatx is representing. 
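To make the adversarial objective of Eq (1) concrete, the sketch below shows one PyTorch training step under the standard minimax GAN loss. The generator and discriminator definitions, latent dimension, and the non-saturating generator update are illustrative assumptions (the discriminator is assumed to end with a sigmoid); CT-GAN adds conditional and tabular-specific machinery that is not reproduced here.

```python
# Illustrative single GAN training step (not the paper's CT-GAN implementation).
import torch
import torch.nn as nn

bce = nn.BCELoss()

def gan_step(G, D, real_batch, opt_G, opt_D, latent_dim=32):
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # discriminator update: push D(x) -> 1 and D(G(z)) -> 0
    z = torch.randn(n, latent_dim)
    fake = G(z).detach()
    loss_D = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # generator update: push D(G(z)) -> 1 (non-saturating variant)
    z = torch.randn(n, latent_dim)
    loss_G = bce(D(G(z)), ones)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```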
The generator minimizes this loss, while the discriminator maximizes it. After training, the generator produces realistic data that confuses the discriminator, while the discriminator becomes better at distinguishing real from fake data. The conditional GAN for synthetic tabular data generation, known as CT-GAN, was developed to solve several problems present in the classic GAN. CT-GAN outperforms the other methods developed to date and is at least 87.5% more effective than Bayesian networks [29].

Time series neural networks
This study employs two distinct types of time series neural networks: the Nonlinear Autoregressive Exogenous Neural Network (NARX-NN) and the Nonlinear Autoregressive Neural Network (NAR-NN). The following subsections discuss these networks in detail.

1) Nonlinear Autoregressive Exogenous Neural Network (NARX-NN). The predicted time series s(t) is determined by its past values and is influenced by an additional external time series x(t), which can be one-dimensional or multi-dimensional. The NARX-NN prediction model uses previous output values together with the exogenous input to estimate future values [31]. In this paper, green finance is the input time series at time t−1, while the nation variable is the exogenous input at time t−1, denoted x(t−1). The single output is denoted y(t). The NARX-NN and NAR-NN are very similar; the country variable serves as the exogenous input in the NARX model.

2) Nonlinear Autoregressive Neural Network (NAR-NN). Linear mathematical models struggle to capture the complexities of real-world economic scenarios, particularly when forecasting the growth of green finance, because these scenarios involve numerous challenges and random fluctuations. To address this limitation, a nonlinear model, as represented by Eq (3), is needed to predict the magnitude of these fluctuations. One powerful tool for nonlinear time series forecasting is the Nonlinear Autoregressive Neural Network (NAR-NN) described in [32]:

y(t) = f(y(t − 1), y(t − 2), y(t − 3), ..., y(t − n)) + ε(t)    (3)

Here y is the green finance data series at time t, n is the input delay of the series, f is a transfer function, and ε(t) is the error term ("error tolerance"). The neural network is trained to learn the underlying function by adjusting the connection weights and neuron biases to minimize the difference between the network's predictions and the actual outputs. The delay of the input is n = 20. The NAR-NN consists of one input layer, one or more hidden layers, and one output layer. The NAR-NN is recurrent and dynamic because of its feedback connections. In this study, we used the built-in narnet() function for the NAR-NN and compared the hyperbolic tangent (tansig, Eq (5)) and sigmoid (logsig, Eq (6)) transfer functions in the context of green finance forecasting.
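The paper implements the NAR-NN with MATLAB's narnet(); the following is a minimal Python analogue of the same idea under stated assumptions: embed the series with n lagged values and fit a small feed-forward network with a tanh or logistic hidden layer (scikit-learn does not offer Levenberg-Marquardt training, so LBFGS is used here instead).

```python
# Hedged Python sketch of a NAR-style one-step-ahead forecaster.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(series, n_lags=20):
    series = np.asarray(series, dtype=float)
    X = np.array([series[t - n_lags:t] for t in range(n_lags, len(series))])
    y = series[n_lags:]
    return X, y

def fit_nar(series, n_lags=20, activation="tanh", hidden=20):
    X, y = make_lagged(series, n_lags)
    model = MLPRegressor(hidden_layer_sizes=(hidden,), activation=activation,
                         solver="lbfgs", max_iter=2000, random_state=0)
    model.fit(X, y)
    return model

def forecast_next(model, series, n_lags=20):
    window = np.asarray(series, dtype=float)[-n_lags:]
    return float(model.predict(window.reshape(1, -1))[0])
```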
The Augmented Dickey-Fuller (ADF) test
The Augmented Dickey-Fuller (ADF) test belongs to the category of statistical tests known as unit root tests. Certain stochastic processes, such as random walks, possess unit roots, which can complicate statistical inference when using time series models. A unit root indicates non-stationarity and does not always manifest as a trend [33]. The ADF test is an 'augmented' version of the Dickey-Fuller test: it allows for higher-order autoregressive processes by including lagged difference terms of the series in the test regression. ADF tests yield a test statistic and a p-value. The test statistic is compared to critical values at the 1%, 5%, and 10% significance levels. If the test statistic is more negative than the critical value, the null hypothesis is rejected and the time series is declared stationary; if the test statistic is less negative than the critical value, the null hypothesis cannot be rejected, which suggests that the time series contains a unit root. The p-value indicates the probability of obtaining a test statistic at least as extreme as the one observed, under the null hypothesis. If the p-value is less than the chosen significance level, the null hypothesis is rejected and the time series is considered stationary; on the contrary, if the p-value exceeds the significance level, the null hypothesis cannot be rejected, suggesting the existence of a unit root in the time series [33].

The proposed prediction model architecture
A generic overview of the proposed model architecture is presented in Fig 1. It consists of three main phases: data preparation, data augmentation using CT-GAN, and a prediction phase using the NAR time series network. Algorithm 1 summarizes the prediction model, and the next sections present these phases in detail.

Algorithm 1: green finance prediction model
1. Read the dataset.
2. Aggregate the data, grouping by continent.
3. Perform the ADF test to select the appropriate prediction model.
4. For each continent's countries (3 continents):
• Generate synthetic data from the real data using generator and discriminator models trained on the minimax GAN loss L_GAN(G, D) of Eq (1).

Data preparation
This phase is crucial in readying the data for analysis. It involves two key processes: selecting and aggregating the data, and conducting statistical analyses to identify the most appropriate prediction model.

Data selection and aggregation. The studied dataset includes green finance data from 40 countries across 5 continents spanning several years, obtained from [34]. Table 2 provides a sample of the data for Denmark, a European nation. While the data included entries from various continents, Europe and Asia had the most comprehensive coverage. Preprocessing was necessary due to the presence of categorical data (shown in Table 3). Additionally, data for different countries were scattered throughout the dataset. To address this, we implemented a two-step organization process. The first step identifies the continent of each country and groups the countries into separate files; this analysis revealed that only Europe and Asia had sufficient data for further analysis. The second step transforms the categorical data into numerical values.
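The stationarity check in step 3 of Algorithm 1 can be reproduced with statsmodels' adfuller, as sketched below. The paper does not state which ADF implementation it used, so this is only an illustrative equivalent; the 0.05 significance level matches the one reported in the paper.

```python
# Hedged sketch: ADF stationarity check per continent series.
from statsmodels.tsa.stattools import adfuller

def is_stationary(series, alpha=0.05):
    stat, pvalue, usedlag, nobs, critical_values, icbest = adfuller(series, autolag="AIC")
    print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}, critical values = {critical_values}")
    return pvalue < alpha   # True: reject the unit-root null, i.e. the series is stationary

# usage: if not is_stationary(europe_series): fit a nonlinear (NAR-NN style) model
```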
Preliminary experiments indicated the need for data augmentation.Consequently, we employed CT-GAN (likely referring to Conditional Generative Adversarial Network) to augment the dataset as the final preprocessing step.The visualizations of green finance growth in Europe and Asia are presented in Figs 2-4. Statistical analysis.Time-series data is valuable for analysis and prediction because it captures trends and patterns that change over time.However, stationary data, which exhibits little change over time, lacks these patterns and isn't ideal for forecasting.Therefore, it's crucial to assess data stationarity before proceeding. To analyze stationarity in our green finance growth data for Europe, Asia, and various countries, we first visualized it.Fig 5(A)-5(C) display the plots for each region, respectively.These visualizations suggest that the data might be non-stationary.To confirm our suspicions, we will employ the Augmented Dickey-Fuller statistical test, a robust method for detecting stationarity [33] The Augmented Dickey-Fuller (ADF) test results, presented in Table 4, reveal that the green finance growth data across all regions (Europe, Asia, and Other Countries) exhibits non-stationary characteristics.This implies that the data lacks consistent trends or patterns over time. For each category, the test statistic is higher than the critical values at various significance levels, and the corresponding p-values all exceed the chosen significance level of 0.05.In statistical terms, these results fail to reject the null hypothesis of non-stationarity.Consequently, the green finance growth data cannot be directly used for traditional forecasting methods that rely on stationary data. Hence, to effectively predict future green finance growth patterns, this study proposes using a Nonlinear AutoRegressive Neural Network (NARNN) model.This type of model is wellsuited for analyzing and predicting non-stationary time-series data. Experimental results and analysis To optimize the network's performance, we employed an iterative approach, evaluating different configurations through multiple tests.The most accurate results were achieved with a single hidden layer containing 20 neurons.We opted for the Levenberg-Marquardt Backpropagation (LMBP) algorithm for training due to its efficiency [35]. Since our goal was one-step-ahead forecasting, a simpler architecture was chosen compared to the typical closed-loop structure used for multi-step predictions.The effectiveness of the final three network configurations was assessed using Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and R-Squared (R 2 ).MSE is a common metric in regression tasks.It measures the average squared difference between predicted values and actual targets (Eq 8).It's important to note that MSE tends to inflate the impact of small errors due to the squaring, potentially overstating the model's shortcomings [11]. To assess the prediction accuracy, N represents the total number of test samples, where y i denotes the ith test sample, and ŷ stands for the predicted value of y i .MSE serves as an indicator of the precision of the forecasting results, with a smaller MSE indicating a more accurate forecast. As shown in Eq 9, the Root Mean Squared Error (RMSE) is utilized to compute the discrepancy between the actual and observed values. 
RMSE = sqrt( (1/N) Σ_{i=1..N} (y_i − ŷ_i)^2 )    (9)

where N is the number of test samples, y_i is the i-th test sample, and ŷ_i is the predicted value of y_i. Because RMSE is based on the average squared error, it is susceptible to aberrant points: if the regression value of a point is not credible, the resulting large error strongly affects the RMSE. The more accurate the predicted results, the smaller the RMSE. Moving on to R-squared (R2), its primary objective is to measure the degree of correlation between predicted and observed data. Consider a dataset comprising n values y_1, y_2, ..., y_n (often denoted y_i, or represented as a vector y = (y_1, y_2, ..., y_n)^T), each corresponding to a predicted value f_1, f_2, ..., f_n. The total sum of squares and the residual sum of squares are computed with Eq (10) and Eq (11):

SS_tot = Σ_{i=1..n} (y_i − ȳ)^2    (10)
SS_res = Σ_{i=1..n} (y_i − f_i)^2    (11)

where ȳ is the mean of the observed data; R2 is then computed as 1 − SS_res / SS_tot.

Experiment (3): Predicting green finance using NAR-NN on the original data
Because of the disappointing outcome of the prior experiment, the NAR-NN model was adopted, as it suits the characteristics of the data. The NAR-NN was used to predict the impact of carbon dioxide emissions and pollution on investment and capital flows. The experiment was carried out on the primary dataset, after preparation, without data augmentation, for countries in Europe, Asia, and other regions. The R2 test was performed, and the findings were acceptable: 96%, 73%, and 97%, respectively, for Europe, Asia, and other regions.

Experiment (4): Predicting green finance using NAR-NN on the CT-GAN augmented data
The results of the previous experiment were acceptable but not satisfactory enough, which could be due to the limited amount of training data. In this experiment, we use data augmented with CT-GAN and apply the NAR-NN model for prediction, again for countries in Europe, Asia, and other regions of the world. The performance of the model is depicted in Fig 9 for each of the three categories. The R2 test was carried out, and the results were successful, with values of 98.8%, 96.6%, and 99% for Europe, Asia, and other regions respectively.

Results analysis
The findings of all the experiments are presented in Table 5. The model used for the Asian countries behaves much like the one used for the European countries. The R2 value of the NAR-NN model without data augmentation is higher than that of the NARX-NN model without data augmentation. Nevertheless, data augmentation led to lower training results for the NAR-NN model compared to the model without augmentation. However, the NAR-NN model combined with CT-GAN data augmentation achieves the highest R2 results during both the test and validation phases. Regarding the final R2 results for training, validation, and testing, the NAR-NN model augmented with CT-GAN data yields the highest values for the models of the different countries: 99.2%, 98.9%, and 99%, respectively.
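The three evaluation metrics above can be computed directly as follows; this is a plain numpy sketch (equivalent helpers such as mean_squared_error and r2_score also exist in scikit-learn).

```python
# MSE (Eq 8), RMSE (Eq 9) and R-squared (from Eqs 10-11).
import numpy as np

def mse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean((y_true - y_pred) ** 2)

def rmse(y_true, y_pred):
    return np.sqrt(mse(y_true, y_pred))

def r_squared(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    return 1.0 - ss_res / ss_tot
```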
Based on the analysis of the results, the NAR-NN model outperforms other models in all three continents.This is consistent with our previous statistical analysis, which recommended that the NAR-NN model is the most suitable for prediction.The proposed CT-GAN also had a significant positive impact on enhancing the results. The proposed model enhances market confidence by providing reliable forecasts of green finance growth, reducing uncertainty, and attracting more investment.This research contributes to economic resilience by diversifying economic portfolios, creating new job opportunities, and stimulating technological advancements.The insights can inform evidence-based policies to accelerate the transition to a sustainable economy, such as targeted incentives and subsidies.Green finance also yields societal benefits, such as mitigating environmental degradation and improving public health.By aligning financial interests with environmental objectives, this research contributes to sustainable development and a prosperous future. Conclusion and future work Climate change, driven by rising atmospheric carbon dioxide levels, poses a significant environmental threat.In response, environmentally responsible finance, or "green finance," has emerged as a critical tool.While research on weather and stock prices remains limited, the link between air pollution and financial markets is gaining recognition.Machine learning techniques, particularly time series neural networks, offer more promising forecasting abilities compared to traditional models in financial analysis. This study aims to predict the future trajectory of green finance and encourage investments in green projects.Notably, the relationship between pollution levels and green finance investments has not been extensively explored.To the best of our knowledge, the relationship between pollution and investments in green finance, particularly, has not been addressed in the literature. Fig 3 . Fig 3. Asian green finance growth.https://doi.org/10.1371/journal.pone.0306874.g003 Fig 8 depicts the performance of the model for each of the three categories.The R 2 Fig 7 .Fig 8 . Fig 7. Experiment (2) Prediction using NARX-NN.https://doi.org/10.1371/journal.pone.0306874.g007 While the training R 2 result for the NAR-NN model without data augmentation is superior to the R 2 result of the NARX-NN model without data augmentation, the training results after the data augmentation are lower than the NAR-NN model without data augmentation.This is the case for the model used in European countries.On the other hand, the test and validation R 2 results for the NAR-NN model with CT-GAN data augmentation yield the greatest results 98.7% and 98.8% respectively.
6,308.4
2024-07-24T00:00:00.000
[ "Environmental Science", "Economics", "Computer Science" ]
Study on the Wetting Mechanism between Hot-Melt Nano Glass Powder and Different Substrates The wettability of molten glass powder plays an essential role in the encapsulation of microelectromechanical system (MEMS) devices with glass paste as an intermediate layer. In this study, we first investigated the flow process of nano glass powder melted at a high temperature by simulation in COMSOL. Both the influence of the different viscosity of hot-melt glass on its wettability on SiO2 and the comparison of the wettability of hot-melt glass on Au metal lead and SiO2 were investigated by simulation. Then, in the experiment, the hot-melt glass flew and spread along the length of the Au electrode because of a good wettability, resulting in little coverage of the hot-melt glass on the Au electrode, with a height of only 500 nm. In order to reduce the wettability of the glass paste on the Au electrode, a SiO2 isolation layer was grown on the surface of golden lead by chemical vapor deposition. It successfully reduced the wettability, so the thickness of the hot-melt glass was increased to 1.95 μm. This proved once again that the wettability of hot-melt glass on Au was better. Introduction Good vacuum packaging [1], even special packaging in a bad environment [2], is an important means to ensure the reliability of MEMS devices. In the middle layer packaging process with nano glass powder, the MEMS sensor can be electrically interconnected with the outside world through the external lead wire of the metal electrode, and the cap, substrate and lead wire can be tightly sealed together by using nano glass powder through hot press bonding. Nano glass powder or the glass frit inter-layer packaging has the advantages of a high tolerance to the surface roughness of the bonding interface, suitable for various materials in the MEMS, electrical insulation characteristics to simplify the electrode lead extraction process and patterning without an additional lithography process by using screen printing [3][4][5]. It has been widely used in the packaging of the MEMS pressure switch [6,7], MEMS gyroscope [8] and accelerometer [9]. Many scholars only describe the packaging principle, packaging process and packaging results of nano glass powder, but there is no report on both the mechanism of infiltration and flow process of hot-melt glass on the substrate. After nano glass powder is made on the glass substrate with metal lead through screen printing, during the process of high temperature melting, the wettability of molten nano glass powder on metal lead and the SiO 2 substrate are different due to a different contact angle, surface tension and adhesion work. After cooling and solidification, the adhesion thickness of the glass powder on the metal lead is different from that on the SiO 2 substrate. If the height of the glass powder inter-layer on the Au metal lead is less than 10 µm [10], the package will fail. In order to improve the results of the direct packaging of nano glass powder in the MEMS structure with metal leads, the wettability of nano glass powder in a hot-melt state was investigated. Firstly, the whole flow process of hot-melt nano glass liquid on silver substrate from the starting point to the material interface wall was simulated by COMSOL. 2 of 9 Then, the wettability of the hot-melt nano glass powder with a different viscosity on the SiO 2 substrate was analyzed and compared by simulation. 
The wetting effect of hot-melt glass with the same viscosity on SiO 2 and Au substrates were also investigated. Finally, it was verified by experiments that the wettability of hot-melt nano glass on Au metal leads was better than that on SiO 2 , which leads to a too small adhesion thickness. By depositing a SiO 2 isolation layer on the metal leads, the wettability of hot-melt nano glass on a Au metal lead was successfully reduced, so as to improve its adhesion thickness on the Au. Simulation Analysis of Wettability Wettability is the degree of difficulty for a liquid to adhere to a solid when it contacts with a solid. It is usually determined by the contact angle between the solid-liquid interface and the liquid-gas interface θ. When the contact angle is less than 90 • , the liquid can wet the solid. When the contact angle is greater than 90 • , the liquid is difficult to wet the solid. Zhu Dingyi et al. [11,12] studied the corresponding relationship between liquid surface tension, solid surface tension and the contact angle. Guan C.H. [13] researched the impact of surface roughness on solid-liquid wettability. Li Wei [14] obtained the contact angle between the hot-melt glass and different substrates through experiments, and the better wettability was attained by polishing the surface of the material. In reference [15], the adhesion work was calculated by measuring the contact angle. The viscosity µ of liquid affected the velocity difference of each layer in the flow, which was one of the key factors affecting the fluidity of the liquid. Reference [16] verified that viscosity µ directly affected the fluidity of the hot-melt alloy liquid, and 1/µ was used to characterize the relationship between the wettability and the temperature of the hot-melt alloy. However, the simulations of the wettability of liquids with a different viscosity on the same substrate and liquids with the same viscosity on different substrates have not been reported. Simulation Model The hot-melt glass powder was filled into a silicon pit sputtered with a layer of different substrate materials and heated to reflow to fill the whole pit. Assuming that the bottom radius of the hot-melt glass column was 2 mm and the height was 5 mm, the radius of the sphere equal to its volume was 2.47 mm. Taking the bottom radius of the cylindrical container made of the base material as 3 mm, we got the simulation model as shown in Figure 1. The material properties of nano glass powder at room temperature were indicated in The material properties of nano glass powder at room temperature were indicated in Table 1: density, 2.221g/cm 3 ; viscosity, 1000 Pa·s; and surface tension, 2003.4 mN/m. According to the relationship between the surface tension and the temperature in Reference [17], the data in Table 2 were preliminarily sorted out and calculated. As shown in Figure 2, the relationship between the contact angle θ and the interfacial tension between solid, liquid and gas can be expressed by "Young's formula". γ sg , γ sl and γ lg represent solid-gas interfacial tension, solid-liquid interfacial tension and liquid-gas interfacial tension, respectively. 
The corresponding relationship between the liquid surface tension, solid surface tension and contact angle [18,19] was expressed by Equation (2): According to the data of hot-melted glass in Table 1 and the surface tension data of the substrate material in Table 2, the contact angle formed when the substrate material and the hot-melted glass were infiltrated and could be calculated by Formula (2). The liquid-gas surface tension γ lg = 2003.4 mN/m. At the same time, Equation (2) was transformed as follows: (√1 + sin 2 θ + cos θ) 2 According to Equation (2), when the contact angle is 90°, the solid surface tension is 1416.6 mN/m. Thus, to consider the positive and negative values of cos θ and convert further: The corresponding relationship between the liquid surface tension, solid surface tension and contact angle [18,19] was expressed by Equation (2): According to the data of hot-melted glass in Table 1 and the surface tension data of the substrate material in Table 2, the contact angle formed when the substrate material and the hot-melted glass were infiltrated and could be calculated by Formula (2). The liquid-gas surface tension γ lg = 2003.4 mN/m. At the same time, Equation (2) was transformed as follows: Micromachines 2022, 13, 1683 According to Equation (2), when the contact angle is 90 • , the solid surface tension is 1416.6 mN/m. Thus, to consider the positive and negative values of cos θ and convert further: According to Equations (4) and (5), the contact angles between each substrate and hot-melted glass could be obtained from the data in Tables 1 and 2. Adhesion work is the energy released in the process of adhesion. In the process of adhesion, the surface energy of the solid and liquid is lost, and the surface energy of the solid-liquid interface is generated. The calculation formula of the adhesion work was as follows: Combined with "Young's formula" (1), we could obtain: According to Formula (8), the adhesion work between hot-melted glass and different substrates could be obtained. The contact angle and the adhesion work which were calculated are shown in Table 3. Table 3. Contact angle and adhesion work between hot-melt glass and each substrate [14]. Simulation of Wettability of Hot-Melt Glass with Different Viscosity on SiO 2 Substrate By changing the viscosity of hot-melt glass from 500 Pa·s to 1000 Pa·s, the influence of the viscosity of the hot-melt glass on the flow velocity and wettability of the hot-melt glass was studied with the SiO 2 as a substrate. Taking the yellow light band as the reference point, the relationship between the viscosity of hot-melt glass and the time needed to flow to the junction of the material bottom and the material wall was explored in this paper. On the SiO 2 substrate, the steady state of hot-melt glass with a different viscosity flowing to the junction is shown in Figure 3, which corresponds to a different flow time. Therefore, when the viscosity of the hot-melt glass was 1000, 900, 800, 700, 600 and 500 Pa·s, respectively, the time of the hot-melt glass flowing to the specified distance on the SiO 2 substrate could also be obtained, as shown in Figure 4. It could be seen that the lower the viscosity of the hot-melt glass, the shorter the flow time to the specified distance, the higher the flow speed and the better the wettability. For the same SiO 2 substrate, the solid surface energy of hot-melt glass with a different viscosity was the same, but the lower the viscosity was, the higher the wettability was. 
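The mapping from surface tensions to contact angle and adhesion work can be illustrated with the standard Young and Young-Dupre relations (gamma_sg = gamma_sl + gamma_lg·cosθ and W_a = gamma_lg·(1 + cosθ)). Note that the paper's Eqs (2)-(5) use a specific relation between liquid and solid surface tension that is not reproduced here, so the sketch below is only a generic illustration using the liquid-gas tension from Table 1.

```python
# Hedged sketch: contact angle and adhesion work from interfacial tensions.
import math

GAMMA_LG = 2003.4  # liquid-gas surface tension of the hot-melt glass, mN/m (Table 1)

def contact_angle_from_tensions(gamma_sg, gamma_sl, gamma_lg=GAMMA_LG):
    """Young's equation solved for theta, returned in degrees."""
    cos_theta = (gamma_sg - gamma_sl) / gamma_lg
    cos_theta = max(-1.0, min(1.0, cos_theta))   # clamp for numerical safety
    return math.degrees(math.acos(cos_theta))

def adhesion_work(theta_deg, gamma_lg=GAMMA_LG):
    """Young-Dupre equation: W_a = gamma_lg * (1 + cos(theta)), in mN/m."""
    return gamma_lg * (1.0 + math.cos(math.radians(theta_deg)))
```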
By changing the viscosity of hot-melt glass from 500 Pa·s to 1000 Pa·s, the influence of the viscosity of the hot-melt glass on the flow velocity and wettability of the hot-melt glass was studied with the SiO2 as a substrate. Taking the yellow light band as the reference point, the relationship between the viscosity of hot-melt glass and the time needed to flow to the junction of the material bottom and the material wall was explored in this paper. On the SiO2 substrate, the steady state of hot-melt glass with a different viscosity flowing to the junction is shown in Figure 3, which corresponds to a different flow time. Therefore, when the viscosity of the hot-melt glass was 1000, 900, 800, 700, 600 and 500 Pa·s, respectively, the time of the hot-melt glass flowing to the specified distance on the SiO2 substrate could also be obtained, as shown in Figure 4. It could be seen that the lower the viscosity of the hot-melt glass, the shorter the flow time to the specified distance, the higher the flow speed and the better the wettability. For the same SiO2 substrate, the solid surface energy of hot-melt glass with a different viscosity was the same, but the lower the viscosity was, the higher the wettability was. Comparison of Wettability of Hot-Melt Glass Solution between SiO2 and Au Substrates Then, the viscosity of the hot-melt glass was kept at 1000 Pa·s, the simulation was carried out on the SiO2 and Au substrates and the simulation results, as shown in Figure 5, were obtained. It could be clearly seen from Figure 5 that it took 22 s for the hot-melt glass to flow to the junction on the SiO2 substrate and 16.5 s on the Au substrate. With the same viscosity, the surface free energy of the liquid was the same. Yet, combined with the Comparison of Wettability of Hot-Melt Glass Solution between SiO 2 and Au Substrates Then, the viscosity of the hot-melt glass was kept at 1000 Pa·s, the simulation was carried out on the SiO 2 and Au substrates and the simulation results, as shown in Figure 5, were obtained. It could be clearly seen from Figure 5 that it took 22 s for the hot-melt glass to flow to the junction on the SiO 2 substrate and 16.5 s on the Au substrate. With the same viscosity, the surface free energy of the liquid was the same. Yet, combined with the parameters in Table 2, the contact angle between the hot-melt glass and the Au substrate was smaller than that of SiO 2 , and the adhesion work and surface tension on the Au substrate were larger, so the wettability was higher and the flow velocity was higher. Comparison of Wettability of Hot-Melt Glass Solution between SiO2 and Au Substrates Then, the viscosity of the hot-melt glass was kept at 1000 Pa·s, the simulation was carried out on the SiO2 and Au substrates and the simulation results, as shown in Figure 5, were obtained. It could be clearly seen from Figure 5 that it took 22 s for the hot-melt glass to flow to the junction on the SiO2 substrate and 16.5 s on the Au substrate. With the same viscosity, the surface free energy of the liquid was the same. Yet, combined with the parameters in Table 2, the contact angle between the hot-melt glass and the Au substrate was smaller than that of SiO2, and the adhesion work and surface tension on the Au substrate were larger, so the wettability was higher and the flow velocity was higher. Experimental The micro pressure switch was packaged with nano glass powder. The hot-melt glass was transparent and the surface morphology was compact and smooth, as shown in Figure 6. 
However, because the wettability between the hot-melt glass and the Au electrode Experimental The micro pressure switch was packaged with nano glass powder. The hot-melt glass was transparent and the surface morphology was compact and smooth, as shown in Figure 6. However, because the wettability between the hot-melt glass and the Au electrode were stronger than that between the hot-melt glass and the SiO 2 substrate, the hot-melt glass flowed rapidly along the length direction of the Au electrode lead and spread out rapidly, resulting in little coverage of this part of the hot-melt glass. After measurement, the thickness of the hot-melt glass on the Au electrode lead was only 500 nm, as shown in Figure 6b. This thickness was not enough to form a sealed package during bonding. were stronger than that between the hot-melt glass and the SiO2 substrate, the hot-melt glass flowed rapidly along the length direction of the Au electrode lead and spread out rapidly, resulting in little coverage of this part of the hot-melt glass. After measurement, the thickness of the hot-melt glass on the Au electrode lead was only 500 nm, as shown in Figure 6b. This thickness was not enough to form a sealed package during bonding. The wettability of hot-melt glass to different materials varies greatly [20,21]. From the above simulation and experimental results, it could be seen that the wettability of hotmelt glass on the Au metal lead was good, so the volume of hot-melt glass passing through the Au metal lead decreased sharply. A silicon wafer sputtered when a large area of Au lines was selected and a thin layer of nano glass powder was manually coated on the whole surface and melted at a high temperature. Figure 7b showed that the amount of hot-melt glass on the Au metal leads was very small, and a small part shrank to the metal free area on the silicon wafer. It was proved that the wettability of glass paste on the Au wire was very strong and the adhesion thickness of the glass paste was not as good as that The wettability of hot-melt glass to different materials varies greatly [20,21]. From the above simulation and experimental results, it could be seen that the wettability of hot-melt glass on the Au metal lead was good, so the volume of hot-melt glass passing through the Au metal lead decreased sharply. A silicon wafer sputtered when a large area of Au lines was selected and a thin layer of nano glass powder was manually coated on the whole surface and melted at a high temperature. Figure 7b showed that the amount of hot-melt glass on the Au metal leads was very small, and a small part shrank to the metal free area on the silicon wafer. It was proved that the wettability of glass paste on the Au wire was very strong and the adhesion thickness of the glass paste was not as good as that of the silicon or glass. Based on the verification results, it was proposed that a SiO 2 isolation layer should be formed on the surface of the metal lead by chemical vapor deposition to reduce the wettability of the glass slurry in this area. The wettability of hot-melt glass to different materials varies greatly [20,21]. From the above simulation and experimental results, it could be seen that the wettability of hotmelt glass on the Au metal lead was good, so the volume of hot-melt glass passing through the Au metal lead decreased sharply. A silicon wafer sputtered when a large area of Au lines was selected and a thin layer of nano glass powder was manually coated on the whole surface and melted at a high temperature. 
Figure 7b shows that the amount of hot-melt glass on the Au metal leads was very small, and a small part shrank to the metal-free area on the silicon wafer. This proved that the wettability of the glass paste on the Au wire was very strong and that the adhesion thickness of the glass paste was not as great as that on the silicon or glass. Based on these verification results, it was proposed that a SiO2 isolation layer be formed on the surface of the metal lead by chemical vapor deposition to reduce the wettability of the glass slurry in this area. The experimental process and results are shown in Figure 8. The SiO2 isolation layer successfully reduced the wettability of the hot-melt glass on the Au metal lead, and this part of the hot-melt glass was consistent with that on the glass sheet. The thickness of the hot-melt glass increased from 500 nm to 1.95 µm. There was a significant difference between the thickness of the glass powder on the metal lead covered with a thin layer of SiO2 and that on the metal lead not covered with SiO2. This proved once again that the wettability of hot-melt glass on an Au substrate was better.
Conclusions
The wettability of the molten glass powder was studied by simulation and experiment. The conclusions obtained in this research are summarized as follows: 1. The smaller the viscosity of the hot-melt glass, the smaller the surface energy of the liquid, the greater the wettability and the higher the flow velocity on SiO2. When the
4,638.2
2022-10-01T00:00:00.000
[ "Materials Science" ]
MtDNA population variation in Myalgic encephalomyelitis/Chronic fatigue syndrome in two populations: a study of mildly deleterious variants Myalgic Encephalomyelitis (ME), also known as Chronic Fatigue Syndrome (CFS), is a debilitating condition. There is growing interest in a possible etiologic or pathogenic role of mitochondrial dysfunction and mitochondrial DNA (mtDNA) variation in ME/CFS. Supporting such a link, fatigue is common and often severe in patients with mitochondrial disease. We investigate the role of mtDNA variation in ME/CFS. No proven pathogenic mtDNA mutations were found. We then investigated population variation. Two cohorts were analysed, one from the UK (n = 89 moderately affected; 29 severely affected) and the other from South Africa (n = 143 moderately affected). For both cohorts, ME/CFS patients had an excess of individuals without a mildly deleterious population variant. The differences in population variation might reflect a mechanism important to the pathophysiology of ME/CFS. Clinically proven pathogenic mtDNA mutations are recognized as a cause of maternally-inherited disorders, with a minimum prevalence rate of 1 mutation in 5,000 (20 per 100,000) people 7. Such mtDNA mutations frequently cause multisystem disorders, with fatigue being prevalent in these patient groups 8. Hundreds or even thousands of copies of mtDNA are present per cell, and the copy number is linked to the energetic demands of the cell. These copies can either be identical, a state called homoplasmy, or two or more species of mtDNA can be present, in a state referred to as heteroplasmy. Thus, a possible approach is to investigate whether mtDNA mutations are present either at heteroplasmy levels sufficient to cause mtDNA disease or at sub-clinical levels that are too low to cause primary mtDNA disease, but are perhaps at sufficient levels to act as a risk factor or affect the course of a complex disease such as ME/CFS. Beyond such recognised pathogenic mtDNA mutations, many studies have suggested a role for common mtDNA variants in complex diseases, with mtDNA variants either modulating susceptibility to a disease and/or affecting the course of the disease, including those where fatigue is an important feature of disease 9,10. While many studies have reported a significant association of specific mtDNA haplogroups with a number of complex disorders, there is often substantial disagreement among different studies examining the same phenotype. Another possibility is that rare mtDNA population variants might have a role in the disease process, since rare variants are more likely to be mildly deleterious, as such variants are removed by purifying selection over generations 11. This is supported by recent work 12,13 using MutPred, a computational tool that has been widely used in the mitochondrial context 14. Given this, a greater number of rare/mildly deleterious variants might be expected in a given patient group 15, which would implicate a role in the disease process. In addition to acting as a susceptibility factor to disease, it has also been suggested that mtDNA variants might modify the course of common complex diseases 16.
In line with this, a recent study of 193 ME/CFS patients and 196 age- and gender-matched controls reported that haplogroups J, U and H, as well as eight mtDNA SNPs, are significantly associated with particular ME/CFS symptoms in patients (course of disease), but not with increased susceptibility to ME/CFS (onset of disease) 10. In the current study, using mtDNA sequence data from ME/CFS patients from both the United Kingdom (UK) and South Africa (RSA), we ask whether mtDNA population variants alter susceptibility to ME/CFS. Due to the maternal inheritance pattern of mtDNA, linking mtDNA variation to common complex traits has been recognized as a difficult task, with models used in nuclear association genetics proving unsuitable 17,18. There have been a number of improvements in the design of haplogroup association studies over the last 10 years, such as the importance of a replicate cohort being recognized 15,19. However, there are still many doubts as to whether this simple methodology is the correct approach to assess the role of mtDNA variation in complex disorders. Indeed, a number of prior studies have applied the haplogroup association approach in complex diseases where fatigue is an important clinical feature, such as multiple sclerosis 9,20, but the results were inconclusive. Here, we applied an improved approach which focuses on variants predicted computationally to be mildly deleterious, most of which are rare. This approach takes advantage of the advances in bioinformatics 21,22 using the mtDNA-server 23 and MutPred tools 14,23. Results Heteroplasmy analysis. To investigate the possibility that ME/CFS patients harbour clinically proven mtDNA mutations, either above the threshold required for mtDNA disease or at a sub-threshold level, the complete mitochondrial genomes of the UK patients (n = 89 moderately affected; 29 severely affected) and the South African ME/CFS patients (n = 143) were analysed. Only two mutations associated with clinically proven mitochondrial disease were seen, m.3337G>A 24 in a patient and m.11778G>A 25 in a control participant, both in the South African cohort. However, these mutations were heteroplasmic at the 5% and 16.5% levels, respectively, and as a result are unlikely to have a phenotypic role. No other clinically proven mtDNA mutations were detected in any of the patient or control groups. It should be noted that such frequencies of low-level pathogenic mtDNA mutations are entirely consistent with large population studies considering this question 26, as well as prior studies on this phenotype 10,27. Haplogroup distributions. Human mtDNAs can be assigned to one of several haplogroups. This traditional classification system was originally based on the presence or absence of one or a small number of likely benign polymorphisms 28, rather than mutations with functional consequences. However, the advent of large-scale sequencing has led to the identification of a vast number of haplogroup-specific polymorphisms, allowing haplogroup classifications to be expanded into more and more subgroups 29. To put our analysis into the context of prior studies, a simple haplogroup distribution analysis was performed. To ensure a robust comparison between the groups from the UK and the RSA, only the frequencies of the nine haplogroups of European origin (HVUKTJIXW) from the RSA ME/CFS cohort were used, as the group from the UK (North East England) was of lower diversity.
The haplogroup distributions for the UK and RSA cohorts - ME/CFS and control - are given in Table 1. Comparing the haplogroup distributions between the UK ME/CFS patients and controls with a Monte Carlo-based approach, no significant difference was observed (ns, p = 0.31). Similarly, there was no significant difference in haplogroup distribution between RSA patients and controls (ns, p = 0.55). Thus, as in the work of Billing-Ross et al., mtDNA haplogroup was not seen to affect susceptibility to ME/CFS 10. Network analysis. Functional networks were produced for the UK cohort (Fig. 1a) and the RSA cohort (Fig. 1b). These networks represent all possible least complex phylogenetic trees, based on the variants included as described in Methods. Sequences that are very similar cluster together in smaller or larger nodes. Nodes are ordered along a phylogenetic tree with links indicating the variants by which a connected node deviated from another 30. In both cohorts, controls were more abundant in nodes that were separated by several variants from the central, haplogroup H-dominated nodes. These peripheral nodes frequently contained mtDNAs that could be assigned to haplogroups T, K and U, although as noted in the previous section, no significant haplogroup associations were found. Individuals with a potentially mildly deleterious mtDNA variant. Haplogroup association studies are confounded by a number of factors, including population stratification, with this and other factors resulting in elevated type 1 error 18,31. These limitations, coupled with the relatively low statistical power of haplogroup association studies 32, underscore the fact that new methods are required for discerning the pathogenic and/or etiological role of mtDNA mutations in a common complex disorder 21,22. We determined the number of individuals with mtDNA sequences containing a variant with a MutPred score over the 0.5 threshold, such variants being considered "actionable hypotheses" or candidates for having a functional effect 14, here referred to as "mildly deleterious". The number of controls and patients in each population that harbour none, one, or more mildly deleterious variants are summarised in Table 2. Figure 2 illustrates these numbers as percentages of each group. For the cohorts collected in the North East of England, 49 (55%) of the moderate ME/CFS patients were without a mildly deleterious variant. Comparing this result to the controls from the United Kingdom, only 17 (27%) of these individuals had no such variants. Conversely, 45% of moderate ME/CFS patients had one or more mildly deleterious variants, compared to 72% of controls from the UK who harboured one or more such variants (see Fig. 2). These differences were found to be highly significant (p = 0.0008) with a Fisher's Exact test. This observation suggests that individuals with ME/CFS are less likely to have an mtDNA variant that is predicted to have a functional impact. As mentioned above, as many as 25% of those with ME/CFS are severely affected, being house or even bed bound. A cohort of such patients was recruited in the North East of England (UK). Of the 29 severe ME/CFS patients, 15 (52%) were without a mildly deleterious variant. Comparing these numbers with those of UK controls using a Fisher's Exact test, a significant difference (p = 0.03) was found.
Thus, even the most severely affected patients had fewer mildly deleterious variants than the controls. It is important to note that when the moderately and severely affected patients are compared using a Fisher's exact test, no difference is observed (p = 0.83). The analysis was repeated in the RSA cohort, including only those ME/CFS cases that fall within the nine haplogroups designated as European (HVUKTJIXW) to ensure that differences in lineage diversity could not impact upon the results. Conducting the same analysis with the 143 ME/CFS patients from the RSA, there were 80 (56%) individuals without a predicted mildly deleterious variant; considering the healthy controls from the RSA, 40 (41%) individuals had no such variants. That leaves 59% of controls from South Africa compared to only 44% of ME/CFS patients who harbour one or more mildly deleterious mtDNA variants. A Fisher's exact test again found these differences to be statistically significant (p = 0.03). Our analyses thus show, for both UK and RSA cohorts, those with ME/CFS have fewer mildly deleterious variants than controls. Discussion We have investigated a possible role of mtDNA variants in ME/CFS. This was done firstly by reviewing the sequence data of each individual for the presence of clinically proven pathogenic mtDNA mutations associated with primary mitochondrial disease. We ruled out the possibility that previously identified pathogenic mtDNA mutations are contributing to ME/CFS, a result that was not unexpected 10,27. Secondly, we considered the possible role of population variants in ME/CFS, with a focus on variants that are predicted to be mildly deleterious by computational methods. These variants are frequently rare at the population level, resulting in an absence of population stratification, and thus in a lower chance of false association or type 1 error. Such false associations are believed to be common in mtDNA association studies applying the haplogroup association model 18. The current approach has been utilized in two prior studies 21,22 that considered Alzheimer's disease and cardiovascular disease, respectively. We compared the number of individuals in both the ME/CFS patient and control groups with variants classified as mildly deleterious, to the number of individuals without such variants. In both the UK and RSA cohorts, there was a significant difference between the patients and the controls, with the ME/CFS patients having a higher percentage of patients without a variant predicted to be mildly deleterious. Although surprising, this observation was seen in two independent cohorts, suggesting that these differences are not due to chance. One of the reasons for this observation might have been that deleterious variants confer disease susceptibility in only severely affected ME/CFS patients or modulate fatigue severity among ME/CFS patients. However, when comparing patients that were severely affected with controls in the UK cohort, again ME/CFS patients had fewer variants predicted to be mildly deleterious. Furthermore, there was no difference in the number of patients with such variants between the moderate and severely affected groups. Taken together, these data suggest that our observation is not the result of a simple patient stratification effect, although additional replication cohorts of moderate and severely affected ME/CFS cohorts are needed to confirm this.
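The group comparisons described above reduce to 2x2 Fisher's exact tests on counts of individuals with and without a mildly deleterious variant. A minimal sketch is given below; the contingency tables are reconstructed from the reported group sizes and percentages, so small rounding differences from the published p-values are possible.

```python
from scipy.stats import fisher_exact

# Rows: [no mildly deleterious variant, >=1 mildly deleterious variant],
# counts reconstructed from the reported cohort sizes and percentages.
comparisons = {
    "UK moderate vs UK controls":   ([49, 89 - 49], [17, 64 - 17]),   # p reported as 0.0008
    "UK severe vs UK controls":     ([15, 29 - 15], [17, 64 - 17]),   # p reported as 0.03
    "RSA patients vs RSA controls": ([80, 143 - 80], [40, 98 - 40]),  # p reported as 0.03
}

for name, (patients, controls) in comparisons.items():
    odds_ratio, p_value = fisher_exact([patients, controls])  # two-sided by default
    print(f"{name}: OR = {odds_ratio:.2f}, p = {p_value:.4f}")
```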
Table 2. Number of individuals harbouring variants with MutPred scores >0.5 (% in brackets).
It has been proposed that those with ME/CFS might not have a problem with ATP production 33, but rather with ATP utilization. Therefore, we consider this genetic difference in mtDNA variation as an accurate observation which may prove to have a biological relationship to the function and regulation of the OXPHOS system and subsequent wide-ranging immediate and downstream consequences on energy metabolism 34,35. It should be considered that the wider homeostasis and/or responsiveness of the various elements of energy metabolism are affected in ME/CFS, rather than merely single "segments" of energy pathways such as ATP production. A number of mitochondrial abnormalities in ME/CFS have previously been reported, indicating that mitochondrial dysfunction may play a role in the pathogenesis of disease, at least in a sub-set of patients 36. Previously, ME/CFS patients have been shown to have significantly lower mitochondrial function than healthy controls 5. Detection of these mtDNA variants has the potential to be used as a tool for measuring mitochondrial dysfunction in ME/CFS. Another avenue of investigation in the mitochondrial field is copy number analysis. Mitochondrial DNA copy number (mtDNAcn) has been reported as an indirect representative of mitochondrial function and as a biomarker of disease in a number of studies. For example, in Parkinson's disease (PD), mtDNAcn was elevated in the pedunculopontine nucleus (PPN), which is a brainstem region associated with progression of motor and non-motor symptoms of PD 37. Other neurodegenerative conditions in which alterations in mtDNAcn have been reported include multiple sclerosis. The results in this phenotype are conflicting, with some showing evidence for reduced copy number 38, while others indicate an increase 39. Additionally, mtDNAcn was shown not to be associated with fatigue status in primary Sjögren's syndrome 30. Taken together, these papers demonstrate that in heterogeneous diseases with a variable course, small single-time-point studies will produce data that are difficult to interpret and likely to conflict between studies. In conclusion, this is the first paper to demonstrate mitochondrial genetic differences between ME/CFS patients and controls. It also demonstrates the power of mtDNA analysis focused on variants likely to have a functional effect to detect differences between case and control cohorts where the traditional haplogroup association method frequently fails to do so. Future studies need to include larger cohorts from multiple centres, within and between nations, with standardized sample handling. These studies need to take a multi-disciplinary approach linking genetics, including mtDNA copy number analysis, and bioenergetics. Given the changing nature of the disease, longitudinal studies would seem to be essential to further understanding by allowing us to determine how mtDNA variation and mitochondrial dysfunction relate to fluctuations in symptom severity. Methods ethical approval and informed consent. This study was conducted in two independent cohorts, one from England and the other from South Africa.
All experimental protocols were approved by the corresponding institutional committees, namely the CRN National Coordinating Centre (CRNCC) | NIHR Clinical Research Network (CRN) in England (UK ME/CFS - IRAS ID 221364) and the Health Research Ethics Committee (HREC) of the North-West University in South Africa (SABPA: NWU 00036-07-S6, CFS: NWU 00102-12). All study procedures were carried out in accordance with relevant guidelines and regulations of Newcastle University and North-West University. All participants gave informed consent and were over the age of 18. patient cohorts. We used two well-characterised cohorts of ME/CFS patients, one from the North East of England (n = 89 moderately affected; 29 severely affected) and the other from South Africa (n = 143 moderately affected). Both cohorts met the Fukuda diagnostic criteria 1. Potentially confounding causes of fatigue, including depression, were excluded in all patients. The two control cohorts were regionally matched and had been collected for two prior studies. The controls from the North East of England (n = 64) were used previously for a variant load study on Alzheimer's disease 15. The control cohort from South Africa (n = 98) comprised healthy high school teachers assembled previously for a study on hypertension and diabetes 22. sequencing. Sequencing of the ME/CFS samples from both the UK and RSA was carried out at Source Bioscience using Fluidigm technology. The sequencing methodology for the controls has been described previously for the UK controls 15 and for the RSA controls 22. The reference sequence used in all datasets was the revised Cambridge reference sequence (rCRS). Network analysis. After selective pruning of the total mtDNA variants to those in (a) protein-encoding genes with MutPred pathogenicity scores above 0.5, and (b) rRNA and tRNA variants, a functional maximum parsimony (MP) network analysis was performed with the NETWORK (version 5.0.0.3) software package (http://www.fluxus-engineering.com/sharenet.htm). Transversions, being chemically less likely to occur, were weighted three times more than transitions. Star contractions were performed using a maximum radius of 5. This step was followed by reduced median (RM) 40 processing (reduction threshold r was 1). The "Frequency >1" criterion was activated to exclude sequences which are unique to the dataset. Network figures were produced using NETWORK Publisher (version 2.1.1). Data analyses. Sequencing data were processed using the online mtDNA-server (mtdna-server.uibk.ac.at) 23 tool. With this tool, homo- and heteroplasmic variants were identified. The pathogenicity status of heteroplasmic variants was assessed using various clinical and online databanks, e.g. MitoMap, and the application of accepted scoring criteria 41,42. For all other analyses described below, only homoplasmic variants and those with a heteroplasmy level above 90% were used; thus only inherited and not somatic variants were considered. Haplogroups were assigned using the online Haplogrep 2.0 tool (haplogrep.uibk.ac.at) 43. The control data from the UK are Sanger sequence data, the processing of which is described in reference 44. Classification and selection of mtDNA variants, as compared by group. Analyses using variants predicted to be mildly deleterious are less prone to the effects of population stratification because they typically analyse rare variants that are not stratified between geographical locations.
The variants are assessed using the MutPred program, which assigns a "pathogenicity" score between 0 and 1 (an amino acid change with a score of 0 is predicted to be perfectly benign). A score above 0.5 for an amino acid change is classified as an "actionable hypothesis" 14. Variants with pathogenicity scores below the "actionable hypothesis" threshold (0.5) are considered less likely to be deleterious or to have an impact on protein function; instead, they are more likely to be common population variants. The inclusion of these more numerous but low-scoring, low-impact variants in the analyses could be problematic, especially because the resultant "noise" differs greatly among different population groups 45. Therefore, as in some of our previous studies, we have not included variants scoring below 0.5 in the current analyses 22. statistical analyses. All statistical analyses were performed using SPSS Statistics (Version 25), Prism (Version 23) or GraphPad (Version 7). Comparisons of haplogroup distribution between the ME/CFS patient cohorts and their corresponding controls were performed using a Monte Carlo-based approach; this methodology is part of the standard package but is more accurate than the chi-square estimation. Fisher's exact tests were utilised to compare the number of patients and controls in each cohort that have mtDNA variants with MutPred scores above 0.5 (mildly deleterious) with those who do not.
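The per-individual classification described in this subsection can be sketched as a simple rule: an individual is labelled as carrying a mildly deleterious variant if any of their inherited non-synonymous mtDNA variants scores above the 0.5 threshold. The input layout below is assumed for illustration and is not the mtDNA-server or MutPred output format.

```python
MUTPRED_THRESHOLD = 0.5   # "actionable hypothesis" cut-off used in the paper

def individuals_without_deleterious(individual_scores):
    """Count individuals with no non-synonymous mtDNA variant scoring above the threshold.

    `individual_scores` maps an individual ID to the MutPred scores of their
    inherited (homoplasmic or >90% heteroplasmy) protein-coding variants.
    """
    return sum(1 for scores in individual_scores.values()
               if not any(s > MUTPRED_THRESHOLD for s in scores))

# Hypothetical example, not study data:
example = {"P1": [0.31, 0.62], "P2": [0.12], "P3": []}
print(individuals_without_deleterious(example))   # -> 2 (P2 and P3)
```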
4,655
2019-02-27T00:00:00.000
[ "Biology" ]
Optimization of the parameters of a partially regular microrelief by vibration rolling method Experience of operating machines shows that their quality depends on the nature of the contact of mating parts. The irregular nature of the surface microrelief, which is due to the traditional treatment methods used, gives rise to difficulties in solving three main problems of microgeometry optimization: reliable, theoretically substantiated normalization; technological support; and accurate measurement and control. This determined the direction in solving the problem of increasing the accuracy and reliability of the surface quality normalization, that is, the microrelief normalization. At present, there are a large number of technological methods for surface treatment aimed at forming a regular microrelief on it. One of the most common and studied methods for regular microrelief formation is the vibration rolling method, based on thin plastic deformation of the surface metal layers and a complex relative displacement of the treated surface and the deforming element. Significant progress in surface quality normalization was achieved after the introduction of GOST 24773-81, the standard for regular microreliefs. For example, the nomenclature of parameters and characteristics of partially regular microreliefs includes the relative area occupied by regular inhomogeneities, FH. FH is the ratio, expressed as a percentage, of the area occupied by regular inhomogeneities to the area of the treated surface. If FH is determined for the 2A·Sk area within the boundaries of a microrelief element at different axial steps of regular inhomogeneities, the microgeometry of a partially regular microrelief can be described ambiguously. To avoid this, it is necessary to consider the multiplicity of the ratio of the amplitude A to the axial step So. Introduction Experience of operating machines shows that their quality depends on the nature of the contact of mating parts [4]. Therefore, the focus is on optimization of the contact surface quality. Among the different characteristics that determine the surface quality of machine parts, such as roughness and waviness, the physical, mechanical and chemical properties and the microstructure of the surface layer, optimization of the microgeometry of the contact surfaces is one of the most important. Requirements for the parameters of microgeometry are based on their relationship with the functional indicators of machine parts. The values of these parameters can be calculated using theoretical or empirical equations for the relationship between the operational characteristics of machine parts and their junctions and the quality characteristics of surfaces. The complexity of solving the problem of optimizing the microgeometry of contact surfaces is aggravated by the irregular nature of the surface microrelief, which is due to the traditional treatment methods used. This microrelief causes difficulty in solving three main problems of microgeometry optimization: reliable, scientifically grounded normalization; technological support; and accurate measurement and control. This determined the direction in solving the problem of increasing the accuracy and reliability of the surface quality normalization, that is, the microrelief normalization. A complex technological problem is solved in various ways, both in relation to the method of influencing the processed material, and in relation to the principle of regularization of the microrelief.
The complexity of the task is compounded not only by the fact that it is necessary to create a regular microrelief on the surfaces of various materials, but also to "manage" it very subtly and within large limits, i.e. to vary the values of all its parameters: height, step and area. At the same time, the variation of the value of each parameter should be independent, i.e. such that when the value of one parameter changes, the values of the others remain unchanged. For example, when changing the height of irregularities, their pitch should not change, as happens in turning and grinding. Almost all known methods of forming regular microreliefs do not meet these requirements. At present, there are a large number of technological methods for surface treatment aimed at forming a regular microrelief [2]. One of the most common and studied methods for regular microrelief formation is the vibrational rolling method developed by Professor Yu. G. Shneider of LITMO, which is based on thin plastic deformation of the surface metal layers and a complex relative displacement of the treated surface and the deforming element [6]. Recently, a large number of studies, laboratory and operational tests of various machine parts and devices with a regular microrelief have been carried out in the Russian Federation and abroad, which revealed new operational characteristics superior to those of parts treated by traditional methods [3]. Problem statement Normalization of the microgeometry should provide a complete description to ensure its optimization. The microrelief inhomogeneity complicates the control of the geometric parameters of the surface quality and their standardization. Thus, when standardizing in accordance with GOST 2789-73, it is necessary to take average values as the microrelief parameters, for example, Ra - arithmetic mean of the profile deviation; Rz - height of the profile inhomogeneities at ten points; Sm - average value of the step of the profile inhomogeneities within the base length; S - average value of the steps of the profile projections within the base length. This complicates the measurement and control of the microgeometric characteristics of the real surface. For a more complete description of the surface roughness, harmonic analysis is used, whereby the surface profilogram can be represented as a finite set of harmonics [4]. Significant progress in surface quality normalization was achieved after the introduction of GOST 24773-81, the standard for regular microreliefs. The high degree of homogeneity of the microgeometry over a surface formed by vibration rolling allows its microrelief to be characterized using geometric parameters that remain unambiguous over the entire working surface. For example, the nomenclature of parameters and characteristics of partially regular microreliefs includes the relative area occupied by regular inhomogeneities, FH. FH is the ratio, expressed as a percentage, of the area occupied by regular inhomogeneities to the area of the treated surface. The FH parameter determined for the 2A·Sk area is of particular interest. Materials and methods The vibration rolling method is used to improve the operational characteristics of machine parts. The vibration rolling method is based on two principles: the replacement of cutting by thin plastic deformation, and the complication of the processing kinematics [6]. The kinematic dependencies that characterize the vibration rolling method are much more complicated than those for ball rolling.
Figure 1 presents the parameters that determine the mode of vibration rolling of cylindrical surfaces: nw - workpiece rotation frequency, rpm; S - feed of the deforming element per turn of the workpiece, mm/rev; nd.el. - number of oscillations of the deforming element per minute; A - oscillation amplitude, mm; dw - workpiece diameter, mm; P - force of the deforming element indentation, N. The deforming element can be a ball, spherical diamond, carbide or other tip. The control of the formation of a regular or partially regular microrelief implies varying the ratio of the motion speeds of the workpiece and the deforming element. In terms of the nature and density of the arrangement of the sinusoidal grooves, the resulting microrelief is divided into four types: grooves that do not contact; that contact; that intersect; and that superimpose to form a microrelief with no areas of the original surface. Figure 3 shows types of partially regular microreliefs with continuously or discretely arranged grooves. Partially regular microreliefs formed by the vibration rolling method differ from traditional ones formed by all other treatment methods by the homogeneity of all geometric parameters, which are functionally related to the mode parameters, and provide the opportunity to normalize the microrelief parameters by setting them on the basis of analytical calculations through the parameters of the vibration rolling mode. It should be noted that the values of the regular microrelief parameters are reproduced with high accuracy at the appropriate vibration rolling mode, and the need to measure them using instruments is eliminated; control of the parameters of the vibration rolling mode is sufficient. At present, the vibration rolling method is being developed through an increased scope of application, and the following technological problems are being solved: stabilization of the process in terms of the depth and shape of the resulting sinusoidal groove; creation of methods for calculating the vibration rolling parameters and the microrelief characteristics depending on the task set; and creation of effective technological equipment for vibration rolling using CNC machines and flexible production systems [1]. Optimization of the parameters of a partially regular microrelief The FH parameter of a partially regular microrelief most fully determines almost all the operational characteristics of surfaces and, first of all, the actual contact area of the solid surface with another surface, the oil absorption of the surface, and the ability to retain wear products in the sinusoidal groove volume and prevent them from being carried to the surfaces of the contacting friction pairs. According to GOST 24773-81, FH is the ratio, expressed as a percentage, of the area occupied by regular inhomogeneities to the area of the treated surface. It is of interest to determine FH for the 2A·Sk area within the boundaries of the microrelief element at various axial steps of the regular inhomogeneities. Consider the elements of a partially regular microrelief formed by vibration rolling presented in Fig. 4 [5]. The trajectory of the deforming element center is described by a sinusoid. To simplify the calculation, assume that the upper and lower boundaries of the sinusoidal groove follow the same sinusoid offset by ±r, where r is the radius of the imprint of the deforming element on the plane. Previously, calculations were carried out to confirm the ambiguity of the parameter FH at different axial steps So on the 2A·Sk area when 2A/(k + 1) < So < 2A/k and the symmetry is violated [5].
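A hedged numerical sketch of the FH calculation just referenced is given below. It assumes the groove band on the 2A x Sk element can be written as y = A·sin(2πx/So) ± r (the simplified boundary description above); the parameter values are illustrative and are not taken from the GOST 24773-81 tables.

```python
import numpy as np

# Illustrative parameters (mm), not values from the standard
A, So, Sk, r = 1.0, 0.8, 2.0, 0.15

# Grid over one 2A x Sk element of the microrelief
x = np.linspace(0.0, Sk, 1201)
y = np.linspace(-A, A, 1201)
X, Y = np.meshgrid(x, y)

center = A * np.sin(2.0 * np.pi * X / So)        # assumed groove center line
inside_groove = np.abs(Y - center) <= r          # points covered by the groove band
FH = 100.0 * inside_groove.mean()                # share of the element area, in percent
print(f"FH ~ {FH:.1f}% of the 2A*Sk element")
```

Re-running the same estimate for several So values in the interval 2A/(k + 1) < So < 2A/k illustrates how FH for a fixed element area depends on the step, which is the ambiguity the paper discusses.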
Analysis of the numerical values of the axial step So and the amplitude of regular inhomogeneity A specified in the standard [7] shows that not all numerical values can provide the condition So = 2A/k. This necessitates further improvement of GOST 24773-81. Conclusion and recommendation 5.1. The parameter FH of a partially regular microrelief most fully determines practically all the operational characteristics of surfaces and, first of all, the actual contact area of a solid surface with another surface, the oil absorption of the surface, and the ability to prevent foreign particles from being carried to the contact surface. 5.2. The relative area occupied by regular inhomogeneities, FH, is an important parameter, but it describes the microgeometry of a regular microrelief ambiguously for certain ratios of the amplitude A to the axial step So.
2,396.8
2021-01-01T00:00:00.000
[ "Materials Science" ]
Estimated Cases Averted by COVID-19 Digital Exposure Notification, Pennsylvania, USA, November 8, 2020–January 2, 2021 We combined field-based data with mathematical modeling to estimate the effectiveness of smartphone-enabled COVID-19 exposure notification in Pennsylvania, USA. We estimated that digital notifications potentially averted 7–69 cases/1,000 notifications during November 8, 2020–January 2, 2021. Greater use and increased compliance could increase the effectiveness of digital notifications. Case investigation and contact tracing (CICT) have been a pillar among COVID-19 prevention strategies, especially before vaccine availability (1,2). However, standard CICT relies on staff to reach cases and close contacts, which is labor intensive, and CICT programs often become overwhelmed when caseloads surge (3)(4)(5). Standard CICT also relies on case investigation interviews to identify contacts; thus, it is prone to recall and participation bias and might not identify all potential exposures, such as interactions between strangers in public spaces. COVID-19 exposure notification smartphone applications (apps) can alleviate those challenges by automatically notifying app users when they have been near other users who reported positive SARS-CoV-2 results (herein referred to as cases). Pennsylvania, USA, and 26 other states implemented digital exposure notifications to complement their standard CICT programs (6). However, few studies have evaluated the effectiveness of digital notifications in the United States (6,7). We estimated the number of cases and hospitalizations averted by Pennsylvania's digital notification system, the COVID Alert PA app. We also investigated strategies to increase the system's efficiency and its effects on the estimated number of cases and hospitalizations. The Study During case investigation interviews in Pennsylvania, digital notification app users were identified and given a validation code to enter into their app. The app then automatically sent anonymous notifications to other users identified through smartphone Bluetooth technology as potentially exposed to the person testing positive for COVID-19 (Appendix, https://wwwnc.cdc.gov/EID/article/29/2/22-0959.App1.pdf). The Pennsylvania Department of Health (PA DoH) collected data on the performance of standard CICT and digital notification apps (Table). We aggregated those data across all counties, excluding Philadelphia County (Appendix), for 8 weeks, November 8, 2020-January 2, 2021 (Table). We extracted the daily number of COVID-19 cases from the Centers for Disease Control and Prevention (CDC) COVID Data Tracker (8). We used CDC's COVIDTracer modeling tool to estimate cases and hospitalizations averted by digital notifications during the 8-week study period (1,2,9). COVIDTracer uses an epidemiologic model to illustrate the spread of COVID-19 and the effects of CICT and other nonpharmaceutical interventions (NPIs). We calculated a summary effectiveness measure for CICT and digital notification apps from the various data PA DoH collected and input this measure into the model (Table). We defined this summary effectiveness measure as the proportion of cases that entered isolation and contacts that quarantined in response to CICT and digital notification apps, and the number of days required to do so (i.e., number of days from exposure to isolation or quarantine).
We further assumed 60%-100% of interviewed cases and monitored contacts fully adhered to isolation and quarantine guidelines, and that 10%-50% of notified but not monitored contacts complied with quarantine guidance (10)(11)(12). To calculate the number of days from exposure to isolation or quarantine, we averaged the number of days between case interviews (triggering case isolation) and contact notifications (triggering contact quarantine). We performed 2 sensitivity analyses by varying the estimated number of days from infection to isolation by ±1 day and the weight used to estimate the overall proportion of cases isolated and contacts quarantined (Appendix Tables 4, 5). The low value results from a scenario assuming 50% of digital notifications were sent to contacts that were already notified by Department of Health staff members and 10% of notified contacts followed quarantine guidance. The high value results from a scenario assuming all digital notifications were sent to contacts that were not notified via standard CICT and 50% of notified contacts followed quarantine guidance. We derived CICT program effectiveness from reported data, but data were not available to estimate the effectiveness of other NPIs, such as social distancing and mask-wearing. Therefore, we used the tool to estimate the effectiveness of other NPIs by fitting the model-generated curve to the observed case curve (Appendix). Finally, to show what might have happened without the digital notifications, we simulated a hypothetical case curve by replacing the CICT effectiveness input with a value excluding contributions of the digital notifications. We considered the difference between cases in the simulated curve and reported cases as the estimated cases averted by the digital notifications. We generated a range of 18 results by varying public compliance with isolation and quarantine guidance and the degree to which recipients of digital notifications were also notified by the PA DoH staff members. First, we assumed no overlap (i.e., all digital notifications were sent to contacts who were not notified by the DoH staff); then, we assumed a 50% overlap (Appendix Tables 4, 5). We also calculated the number of hospitalizations averted by multiplying the estimated number of averted cases by age-stratified infection-to-hospitalization rates (9). We did not account for vaccination because only 0.1% of Pennsylvania's population was fully vaccinated during the study period. Between its launch in late September and the end of the study period, Pennsylvania's digital notification app was downloaded 638,797 times, accounting for ≈5.7% of the population; 56% (n = 356,835) of downloaded apps were actively used, accounting for 3.2% of the population. In all, 786 interviewed case-patients (0.2% of all cases) had the digital notification app installed on their smartphones, among whom <50% (n = 390) used the app to notify others of potential exposure, totaling 233 digital notifications during the 8-week period (Table). We estimated those digital notifications averted 2-16 additional cases (7-69 cases/1,000 notifications) and <1 hospitalization (Figure 1; Appendix Tables 4, 5). That range reflects uncertainties in both public compliance and the degree of overlap between notifications received via the digital notification app and from DoH staff.
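The per-notification effectiveness quoted above is simply the modeled range of averted cases scaled to the 233 notifications actually sent; a quick arithmetic check:

```python
# Minimal check of the per-1,000-notification rate reported in the article.
notifications = 233                  # digital notifications sent during the 8-week period
averted_low, averted_high = 2, 16    # estimated range of cases averted

rate_low = 1000 * averted_low / notifications
rate_high = 1000 * averted_high / notifications
print(f"~{rate_low:.0f}-{rate_high:.0f} cases averted per 1,000 notifications")
# Roughly 9-69 per 1,000; small differences from the published 7-69 range
# presumably reflect rounding of the underlying model estimates.
```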
Figure 1. Overlap between standard CICT and digital notifications in a study of estimated cases averted by COVID-19 digital exposure notification, Pennsylvania, USA, November 8, 2020-January 2, 2021. During the study period, standard CICT resulted in interviews and contact elicitation from 20% of the reported cases (blue shaded circle), and 3.2% of the population actively used the digital notification app (red shaded circle). During case interviews, app users were provided validation codes for initiating contact notifications via their digital notification app (overlap of red and blue shaded circles; 0.2% of all cases). The effectiveness will be greater in the following scenarios. First, any case in the overlap of the shaded red and unshaded blue circle (including persons who used at-home testing) can generate notifications via the app. Second, a larger shaded red circle reflects a higher proportion of the population actively using the digital notification app. Last, a larger unshaded black circle reflects a situation where more individuals can generate validation codes and receive exposure notifications. CICT, case investigation and contact tracing.
In comparison, we estimated standard CICT averted 10,168-17,151 cases and 250-421 hospitalizations during the same period. Conclusions Although just 3.2% of the state's population used the COVID Alert PA app, we estimated that 7-69 cases were averted for every 1,000 digital notifications sent during the 8-week study. Those estimates represent a single locality and should not be generalized to other jurisdictions. However, the methods, and the publicly accessible modeling tool, could be used to adjust for differences in uptake, compliance, and epidemic curve to estimate the effect of digital notifications in other jurisdictions. Greater use, increased compliance, or changes to digital notification system operations might increase its effectiveness (Figure 2). UK researchers assessing a similar app estimated that 167-349 cases were averted for every 1,000 notifications with a 28% adoption rate (13). Greater use appears achievable based on multiple reports indicating >17% of the population activated digital notification apps in 11 states and participation approached 50% in states where adoption was greatest (6,7). When we examined hypothetical scenarios in which 50% of the population actively used the app in Pennsylvania, all else remaining equal, we found that up to 3,995 cases could have been averted by digital notifications during the study period (Appendix). The potential increase in cases averted by digital notifications requires additional research and should consider other factors, such as alternative digital notification system operations. For example, effectiveness might be improved with automatic digital notification versus relying on case-patients to initiate contact notification after being interviewed. Some jurisdictions also started permitting users to self-report as COVID-19-positive and initiate digital notifications on the basis of at-home testing, which could improve both the number and timeliness of digital notifications (14). Although such gains are promising, they are moderated by the public's compliance with digital notifications and technological limitations of Bluetooth signaling, leading to missed exposures and potentially false notifications. Our findings suggest that the use of digital notification apps helped avert COVID-19 cases in Pennsylvania, although its effectiveness was limited by numerous factors, most notably limited use.
The results also suggest opportunities exist to further examine and improve digital notification systems and their use during future outbreaks (Figure 2). Public health practitioners should explore ways to increase public participation in digital notification apps and to improve system efficiency by increasing the timeliness, coverage, and accuracy of digital notifications. About the Author Dr. Jeon is a senior statistician in the Health Economics and Modeling Unit, Division of Preparedness and Emerging Infections, National Center for Zoonotic and Emerging Infectious Diseases, Centers for Disease Control and Prevention. Her research interests include leveraging statistical and mathematical models to estimate the impact of public health interventions.
2,214.8
2023-01-13T00:00:00.000
[ "Mathematics" ]
Cost Analysis for Patients with Ventilator-Associated Pneumonia in the Neonatal Intensive Care Unit The concept of improving the quality and safety of healthcare is well known. However, a follow-up question is often asked about whether these improvements are cost-effective. The prevalence of nosocomial infections (NIs) in the neonatal intensive care unit (NICU) is approximately 30% in developing countries. Ventilator-associated pneumonia (VAP) is the second most common NI in the NICU. Reducing the incidence of NIs can offer patients better and safer treatment and at the same time can provide cost savings for hospitals and payers. The aim of the study is to assess the direct costs of VAP in the NICU. This is a prospective study, conducted between January 2017 and June 2018 in the NICU of University Hospital “St. George” Plovdiv, Bulgaria. During this period, 107 neonates were ventilated for more than 48 h and included in the study. The costs for the hospital stay are based on the records from the Accounting Database of the setting. The differences directly attributable to VAP are presented both as an absolute value and percentage, based on the difference between the values of the analyzed variables. There are no statistically significant differences between patients with and without VAP in terms of age, sex, APGAR score, time of admission after birth and survival. We confirmed differences between the median birth weight (U = 924, p = 0.045) and average gestational age (t = 2.14, p = 0.035) of the patients in the two study groups. The median length of stay (patient-days) for patients with VAP is 32 days, compared to 18 days for non-VAP patients (U = 1752, p < 0.001). The attributive hospital stay due to VAP is 14 days. The median hospital costs for patients with VAP are estimated at €3675.77, compared to the lower expenses of €2327.78 for non-VAP patients (U = 1791.5, p < 0.001). The median cost for antibiotic therapy for patients with VAP is €432.79, compared to €351.61 for patients without VAP (U = 1556, p = 0.024). Our analysis confirms the results of other studies that the increased length of hospital stays due to VAP results in an increase in hospital costs. VAP is particularly associated with prematurity, low birth weight and prolonged mechanical ventilation. Introduction The concept of improving the quality and safety of healthcare is well known. However, a follow-up question is often asked about whether these improvements are cost-effective. Due to the lack of reliable data to inform about quality and safety in healthcare, there are some hesitations to increasing investments, until the financial benefits are more clearly defined [1]. Healthcare is a dynamic industry where the assets (staff, technology, equipment) needed for success are becoming increasingly scarce and expensive. Despite rising care costs, pressure from payers to resist these increases continues to grow. Improving quality by reducing medical errors, length of stay and costs is an important alternative to scaling up and hiring more staff-key factors contributing to rising costs. Reducing the incidence of nosocomial infections can offer patients better and safer treatment, and at the same time can provide cost savings for hospitals and payers. The incidence of nosocomial infections (NIs) in the neonatal intensive care unit (NICU) is approximately 30% and accounts for up to 40% of reported neonatal deaths in developing countries [2]. 
Neonatal hospital infections, in addition to being the cause of a significant number of perinatal, neonatal and postnatal deaths, are also associated with increased healthcare costs. This is because the hospitalization of infected neonates is up to threefold longer than that of non-infected children [3]. Ventilator-associated pneumonia (VAP) is hospital-acquired pneumonia that develops in patients who have been intubated and have received mechanical ventilation for at least 48 h [4]. It is the second most common nosocomial infection and has a major impact on neonatal morbidity, survival, hospital costs and length of stay in the NICU [2,5,6]. The incidence of VAP in the NICU is difficult to pinpoint, as it is difficult to distinguish between new or progressive radiographic infiltrates due to neonatal pneumonia or due to the exacerbation of bronchopulmonary dysplasia and frequent episodes of atelectasis [7]. VAP occurs at higher levels among extremely low birth weight infants and is a major risk factor for complications and death (RR: 3.4; 95% CI: 1.20 to 12.32) [8]. VAP increases the length of stay in ICUs and in the hospital, and this results in increased costs of hospitalization. A few studies have been conducted in pediatric intensive care units (PICU) that might help determine the extent of the problem. A study from Nicaragua [9] estimated the average cost of hospitalization for a patient in the PICU with VAP at $9686, and $3779 for non-VAP patients. Romero et al. [10] calculated $6174.89 for the treatment of one episode of VAP. Studies from Iran [11] and the USA [12] have identified the prolonged hospital stay as the main driver of the attributable costs of up to $1040 and $51,157, respectively. Locally, in our scientific literature, there is a monograph published in which the authors did a landscape review on the global financial burden due to the treatment costs of nosocomial infections in NICU, but there are no studies that specifically estimate the costs for the treatment of VAP in the Bulgarian NICUs [13]. The aim of the current study is to assess the direct costs of VAP in the neonatal intensive care unit. Study Period and Settings This was a prospective study, conducted in the period between January 2017 and June 2018 in the NICU of University Hospital "St. George" Plovdiv, Bulgaria. The study was conducted in one hospital setting which is located in the second largest Bulgarian city, and the NICU of the hospital is a level 3 NICU with respect to the care provided for the patients. This neonatology unit is the only available option for the South-Central region population, which represents approximately 20% of the overall Bulgarian population [14]. The number of deliveries for the period of the study was 3306 (1700 in 2017, and 1606 in 2018). Additionally, 52 infants (23 in 2017, and 29 in 2018) were admitted for medical treatment from other hospitals. Overall, 352 neonates in 2017, and 343 in 2018 were admitted for intensive care treatment in the NICU. Definition and Identification of VAP VAP was defined as such by the criteria of the German system for surveillance of nosocomial infections NEO-KISS [15] and the Center for Disease Control and Prevention (CDC) [16]. Additionally, the criteria for VAP from the Bulgarian Medical Standard for Prevention and Control of Nosocomial Infections have been used [17]. 
VAP was defined as a clinically unstable respiratory condition with at least 2 clinical and laboratory signs and symptoms, chest X-ray findings showing new or progressive infiltration, and isolation of a pathogen from the endotracheal aspirate. The clinical signs included elevated temperature >37.8 °C, hypothermia, frequent apnea/bradypnea/tachypnea, bradycardia <80 b/m and change in tracheal secretions (color, quantity). Laboratory findings included CRP >10 mg/L, abnormal white blood cell count (Leu > 30,000/mcg or Leu < 5000/mcg) and thrombocytopenia (Thr < 150,000/uL). Patient Characteristics During the study period, 507 neonates were followed up prospectively. Of them, 107 neonates were ventilated for more than 48 h and were included in the study. Data on the demographic characteristics of the patients, underlying diseases, clinical symptoms, X-ray examinations, the incidence of VAP, etiological agents and antimicrobial susceptibility rates were recorded. Endotracheal intubation was performed by observing the standard precautions (sterile gowns, masks, laryngoscope blades and tubes) to ensure the sterility of the equipment until use. Endotracheal suctioning was performed every 8 h and whenever microbiological material was needed for examination. Closed systems for endotracheal suctioning were used in the NICU. In the NICU, there is a standard protocol for empirical antibiotic treatment that was followed by all neonatologists. The administration and duration of additional antibiotic treatment depended on the individual needs of the infant's clinical condition (results from microbiological testing-antibiogram). Hospital Costs The costs of the hospital stay were based on the records from the Accounting Database of the University Hospital "St. George". This accounting system includes components for direct and indirect costs, which are distributed under the standards of the Bulgarian Accountancy Act of 2015. The costs for patient-days are divided into the following categories: food (inpatient and staff); medicines (medicines, medical supplies, blood and blood products, disinfectants, hygienic materials, other medical expenses); fuels and energy (water, electricity, heat, stationery, other materials); current repairs, other external services (laboratory services); amortization, salary expenses (salaries and other remunerations and payments); insurance costs (insurance for state social premiums and health insurance premiums, premiums for senior medical staff); expenses for taxes, fees and other similar payments; other expenses. Neonatal costs per day were calculated individually for each patient according to the date of admission/discharge from the ward, duration of ventilation, date of diagnosed VAP infection and antibiotics used. The individual costs were summed in each category (VAP/non-VAP patients) to calculate the total costs on average for each month for the overall period of the study for the NICU. Our study site is a third-level NICU, which provides complex care for the smallest and most premature infants until reaching a stable condition. Some of our prospectively studied (VAP/non-VAP) infants were discharged home, whereas those born prematurely were transferred to another NICU in our city for additional care until reaching the weight threshold for hospital discharge. There were 22 infants who were reported dead.
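Referring back to the case definition given at the start of this section (at least two clinical/laboratory signs, a new or progressive infiltrate on chest X-ray, a pathogen isolated from the endotracheal aspirate, and at least 48 h of ventilation), the screening logic can be expressed as a simple rule. This is an illustrative sketch of that logic only, not the NEO-KISS or CDC surveillance algorithm itself.

```python
def meets_vap_definition(signs_and_lab_findings: int,
                         new_or_progressive_infiltrate: bool,
                         pathogen_in_endotracheal_aspirate: bool,
                         hours_ventilated: float) -> bool:
    """Screening rule sketched from the case definition described in the text."""
    return (hours_ventilated >= 48
            and signs_and_lab_findings >= 2
            and new_or_progressive_infiltrate
            and pathogen_in_endotracheal_aspirate)

# Hypothetical infant: ventilated 72 h, 3 supporting signs, positive X-ray and aspirate.
print(meets_vap_definition(3, True, True, 72))   # -> True
```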
Directly Attributable to VAP Differences The difference directly attributable to VAP is presented both as an absolute value and as a percentage, based on the differences between the values of the analyzed variables: average hospital stay (patient-days), average treatment duration (antibiotic-days), average hospital costs, average hospital costs per day, average costs for antibiotics and average costs for antibiotics per day, for patients with and without VAP. The initial estimations were completed in 2018 Bulgarian leva (BGN). In order to facilitate comparisons with other studies, Bulgarian leva were converted to European currency (€) at a fixed exchange rate of 1.95583 leva to the euro (a fixed rate maintained under the International Monetary Fund-led currency board arrangement since 1999). Statistical Methods Quantitative variables were presented as the mean ± standard deviation (mean ± SD) or as the median (25th percentile; 75th percentile), depending on the sample distribution. The variables were compared using the independent-samples t-test or the Mann-Whitney U test, depending on the normality of the distribution, which was assessed with the Shapiro-Wilk test. Qualitative variables were presented as numbers/totals and percentages (n, %), and a z-test was applied to compare two proportions. A p-value < 0.05 was considered statistically significant for all tests. The statistical analysis was performed using SPSS v. 26 for Windows (IBM Corp., Armonk, NY, USA; released 2019). Demographic Characteristics of Patients VAP was diagnosed in 33 (30.8%) out of 107 patients included in the study. In two of the infants with VAP, sepsis was diagnosed as a secondary nosocomial infection during the hospital stay, and in another one conjunctivitis was diagnosed as a second healthcare-associated infection. Information on the primary diagnosis was not available for all of the ventilated patients. The distribution of VAP patients by primary diagnosis was as follows: RDS (respiratory distress syndrome) in the neonatal period, 18 neonates; congenital pneumonia, 5 neonates; congenital heart disease, 3 neonates; extreme prematurity, 4 neonates; and birth asphyxia, 3 neonates. The other 74 non-VAP neonates were used as a control group. Males accounted for 56.1% of all studied neonates, with a median age at admission of 1 day (25th percentile: 1 day; 75th percentile: 1 day). Table 1 presents a comparison between the patient groups, with and without VAP, respectively. There were no statistically significant differences between VAP and non-VAP patients in terms of age, sex, APGAR score at the 1st and 5th minute, time of admission after birth and survival. We confirmed statistically significant differences between the median birth weight (U = 924, p = 0.045) and the average gestational age (t = 2.14, p = 0.035) of the patients in the two study groups. A large proportion of the children in both groups were born prematurely (before 37 gestational weeks): 81.8% (n = 27) with VAP and 70.3% (n = 52) without VAP, with no statistically significant difference between the two groups (z = 1.3, p = 0.211). In the group of children with VAP, we observed a relatively high percentage of children born weighing <999 g (n = 9, 33.3%) and before 28 gestational weeks (n = 13, 48.1%). 
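As an illustration of the statistical workflow described above (the study itself used SPSS v. 26), the following sketch shows how the same comparisons could be reproduced with open-source tools. The variable names and the example arrays are placeholders; only the logic (Shapiro-Wilk normality check, then an independent-samples t-test or a Mann-Whitney U test, plus a two-proportion z-test) follows the stated methodology.

# Illustrative sketch of the group comparisons; not the code used in the study.
import numpy as np
from scipy import stats

def compare_groups(vap: np.ndarray, non_vap: np.ndarray, alpha: float = 0.05):
    """Compare a quantitative variable between VAP and non-VAP patients:
    t-test if both samples pass Shapiro-Wilk, Mann-Whitney U otherwise."""
    normal = (stats.shapiro(vap)[1] > alpha) and (stats.shapiro(non_vap)[1] > alpha)
    if normal:
        name, result = "independent-samples t-test", stats.ttest_ind(vap, non_vap)
    else:
        name, result = "Mann-Whitney U", stats.mannwhitneyu(vap, non_vap, alternative="two-sided")
    return name, result.statistic, result.pvalue

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """z-test for the difference between two proportions, using a pooled estimate."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * stats.norm.sf(abs(z))

# Example: prematurity proportions of 27/33 (VAP) vs. 52/74 (non-VAP).
z, p = two_proportion_z_test(27, 33, 52, 74)

With the published counts for prematurity (27/33 with VAP vs. 52/74 without), the two-proportion test above returns z of about 1.3, in line with the value quoted above.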
Etiology of VAP Microorganisms invading the respiratory tract may cause VAP. The prevailing causative agents of VAP in our study were from the Gram-negative microflora, with Klebsiella pneumoniae ESBL+ (27.3%, n = 18) and Acinetobacter baumannii (13.6%, n = 9) as the leading microorganisms (Table 2). In 45.5% of the patients with VAP, polymicrobial flora was isolated. Additionally, 66 blood cultures were taken from the 33 neonates with VAP, and 2 of these patients were diagnosed with sepsis. In the first infant, coagulase-negative Staphylococcus was isolated from two positive blood cultures; in the second infant, Enterococcus faecium was isolated from one positive blood culture. Length of Stay (LOS) Most of the patients (96.3%, n = 103) were admitted to the NICU within the first 24 h after birth. The median length of stay (LOS, patient-days) for patients with VAP was 32 days (25th percentile: 19 days; 75th percentile: 46 days) compared to 18 days (25th percentile: 11 days; 75th percentile: 27 days) for patients without VAP (U = 1752, p < 0.001). The hospital stay attributable to VAP was 14 days. For the group of patients with VAP, the median hospital stay and duration of mechanical ventilation before VAP diagnosis was 8 days (25th percentile: 6.5 days; 75th percentile: 10.5 days). There was a statistically significant difference in the median duration of mechanical ventilation between the two groups: 12 days with VAP versus 4 days without VAP (U = 2068.5, p < 0.001). Lethality rates in both groups were close (z = 0.4, p = 0.688) (Table 1). Costs The median hospital costs for patients with VAP were estimated at €3675.77 (25th percentile: €2498.87, 75th percentile: €5146.35), compared to the statistically significantly lower €2327.78 (25th percentile: €1434.10, 75th percentile: €3226.83) for patients without VAP (U = 1791.5, p < 0.001). The median cost of antibiotic therapy for patients with VAP was €432.79 (25th percentile: €282.48, 75th percentile: €994.23), compared to €351.61 (25th percentile: €212.42, 75th percentile: €587.75) for patients without VAP (U = 1556, p = 0.024). Table 3 summarizes the costs directly attributed to VAP. Initially, all of the total costs were higher in the group of non-VAP patients, which contained approximately 55% more neonates. In the next step, we calculated the average costs and estimated the directly attributed VAP difference for each variable. VAP thus adds significant expenditure mainly through the increased length of stay (patient-days) and the longer duration of antibiotic treatment, with a smaller additional burden from the hospital costs per day and the costs for antibiotics. Discussion The present study examines, for the first time in Bulgaria, the costs associated with NICU patients diagnosed with VAP on the basis of clinical diagnosis, microbiological results and X-ray examination, thus avoiding the limitations of diagnosis by clinical criteria alone and facilitating the identification of the costs directly attributable to the diagnosis. The cost estimate follows the approach of numerous economic studies in the field that focus on the main determinants of cost: length of hospital stay (patient-days), total hospital costs and the cost of antibiotic therapy. In addition, our approach includes calculating the costs directly attributable to VAP in absolute value and as a percentage. The analysis demonstrated a statistically significant difference in both hospital costs and length of stay (patient-days), as well as in the costs of antibiotic therapy, for patients with and without VAP. 
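A minimal sketch of the "directly attributable to VAP" bookkeeping described in the Methods is given below. It is not the authors' code, and the numeric inputs are placeholders chosen only so that the output matches the attributable figures quoted in the next paragraph; the fixed BGN to EUR rate is the one stated above.

# Hypothetical sketch of the attributable-difference calculation and the
# fixed-rate currency conversion used for reporting.
BGN_PER_EUR = 1.95583  # fixed rate of the Bulgarian currency board since 1999

def bgn_to_eur(amount_bgn: float) -> float:
    """Convert an amount recorded in Bulgarian leva to euro at the fixed rate."""
    return amount_bgn / BGN_PER_EUR

def attributable_to_vap(avg_vap: float, avg_non_vap: float):
    """Difference directly attributable to VAP for one analyzed variable
    (patient-days, hospital costs, antibiotic costs, ...): absolute value and
    percentage increase relative to the non-VAP average."""
    absolute = avg_vap - avg_non_vap
    percentage = 100.0 * absolute / avg_non_vap
    return absolute, percentage

# Placeholder averages of 36 vs. 22 patient-days reproduce an attributable stay
# of 14 days (63.6%), the figures quoted in the following paragraph.
extra_days, extra_days_pct = attributable_to_vap(36.0, 22.0)
# Placeholder leva amounts whose difference converts to roughly EUR 1918.
extra_cost_eur = bgn_to_eur(attributable_to_vap(8776.0, 5025.0)[0])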
This is important because VAP is one of the most common nosocomial infections in patients in pediatric and neonatal ICUs [18]. The observed increase in the duration of hospital stay for VAP compared with non-VAP patients might be partly explained by the significantly lower birth weight and average gestational age of the patients with VAP. Low birth weight and prematurity have already been confirmed as independent risk factors for VAP in a meta-analysis of observational studies [19]. Low birth weight and prematurity imply a longer hospital stay, whereas VAP as a complication additionally requires prolongation of the stay until the infection is treated. VAP is the most common indication for the initiation of empirical antibiotic therapy in the pediatric intensive care unit (PICU), accounting for nearly half of all antibiotic days [20]. The leading pathogens isolated in patients with VAP in our study were from the Gram-negative microflora, which is in accordance with previous studies [4,7]. Additionally, almost half of the VAP patients had polymicrobial flora, which can further explain the longer duration of antibiotic treatment and the higher number of antibiotics used in this group. The balance between adequate treatment and avoiding overtreatment with antibiotics is a challenging task, and studies determining the optimal duration of antibiotic use are sparse [21]. The absolute differences between the total number of patient-days and antibiotic-days, as well as between the total hospital costs and the total costs for antibiotics, show higher totals for the patients without VAP, but this comparison does not take into account that there were 55.4% fewer patients with VAP. For this reason, the difference directly attributed to VAP between the two groups was reported on the basis of mean values. The difference directly attributed to VAP in the average hospital stay (patient-days) is an increase by an average of 14 days (63.6%), and the costs of hospital treatment show an average increase of €1918.00 (74.7%). Several studies conducted in the NICU and PICU [5,7,12,22,23], as well as those involving adult patients [24][25][26], reported that most of the costs associated with VAP were due to an increase in the hospital stay. In a 2-year study of PICU patients, VAP remained independently associated with increased costs after accounting for other variables related to costs, including age, underlying disease, days of mechanical ventilation and severity of disease [21]. The results of our economic analysis are in accordance with already published studies highlighting that VAP increases hospital stay and costs [12,27,28]. However, our data show a much larger increase in the hospital stay (63.6%) and costs (74.7%) attributed to VAP than previously reported. In addition, we observe almost identical average hospital costs per day for neonates with and without VAP (€125.78 vs. €118.08), which suggests that the increased number of patient-days is the main driver of the increased costs. The longer LOS (patient-days) and days on mechanical ventilation that we observe have been confirmed by other authors for neonatal and pediatric VAP patients [27,29], together with a tendency towards increased mortality [7,30,31], which in our case did not reach statistical significance between the two groups of patients [32]. VAP proved to be the leading nosocomial infection in the NICU, not only in our prospective study, but also in the retrospective period from 2012 to 2016 [33]. 
This infection proves to be the most problematic for the studied setting, and the reasons for this are likely complex. This is one of the leading NICUs in the country, in which neonates in very severe condition from 8 (out of 28) districts have been managed [33,34]. Prevention of nosocomial infections, including VAP, is based on strategies to reduce the susceptibility of newborns to infections by limiting risk factors and strengthening the body's defenses. One of the most important preventive measures when it comes to VAP is early extubation, together with the use of a closed endotracheal suctioning system and switching to non-invasive ventilation methods, such as NCPAP. Several studies have shown a reduction in the VAP rate after the implementation of guidelines as a bundle [35,36]. The power of the bundle is that it brings together several evidence-based practices that individually improve care but, when applied together, may result in an even greater improvement in the desired outcome [37]. However, our research has several limitations, including its design, which is a case-control study without adjustment. Thus, we were not able to adjust for outborn patients in our analysis, because information on the care they received before transfer to our study setting is not publicly available and we could not access it. It is logical that this specific group affects the number of patient-days recorded and the potential costs, and adds to the overall burden on the healthcare system, especially the National Health Insurance Fund, but the initial expenses were incurred in different settings, which are not in the focus of our study. Another limitation is that the lack of matching of cases might not eliminate confounding, although for epidemiological case-control studies it has been reported that results were essentially the same irrespective of whether matching was applied [38]. In addition, this study was conducted in one hospital; however, it is located in the second largest Bulgarian city and, moreover, the NICU of this hospital is the only available option for the South-Central region population, which represents approximately 20.0% of the overall Bulgarian population [14]. We could consider that the "St. George" NICU's resources, staff and patient numbers are similar to those of the NICUs located in the other five Bulgarian regions. Ideally, sampling across other NICUs in Bulgaria would have supported the case for the generalizability of the findings. Summarizing the results of an economic analysis is often a challenge, as actual costs vary across institutions based on different staff costs and different models of supply and use of pharmaceutical products. Yet, the use of costs rather than clinical pathways is preferable, as costs are considered a more reliable assessment of the financial burden and more accurately support institutional comparisons. We believe that our study somewhat underestimates the real costs of VAP, as its narrow perspective analyzes direct costs and does not include costs for medication. In addition, indirect costs such as the economic burden on the family due to loss of income, family break-up and costs of pain and/or disability are not included because of methodological issues and lack of information. No attempt has been made to measure the impact of functional deficits in patients with VAP. 
Conclusions This study is focused exclusively on neonatology patients and can serve as a basis for comparative analyses of data from other similar wards. It is the first attempt to estimate the economic impact of VAP in a NICU in a Bulgarian setting. VAP remains a serious and unresolved problem in pediatric and neonatal intensive care units. VAP is particularly associated with prematurity, low birth weight and prolonged mechanical ventilation. Our analysis confirms the findings of other studies that the increased length of hospital stay due to VAP results in increased hospital costs. Funding: This research received no external funding. Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Medical University Plovdiv (protocol code No. 02-2/10.04.2017). Informed Consent Statement: Informed consent was obtained from all subjects, or their guardians, involved in the study.
5,453
2022-05-25T00:00:00.000
[ "Medicine", "Economics" ]
A fragmentation-based study of heavy quark production Processes involving heavy quarks are a crucial component of the LHC physics program, both by themselves and as backgrounds for Higgs physics and new physics searches. In this work, we critically reconsider the validity of the widely-adopted approximation in which heavy quarks are generated at the matrix-element level, with special emphasis on the impact of the collinear logarithms associated with final-state heavy quark and gluon splittings. Our study, based on a perturbative fragmentation-function approach, explicitly shows that neglecting the resummation of collinear logarithms may yield inaccurate predictions, in particular when observables exclusive in the heavy quark degrees of freedom are considered. Our findings motivate the use of schemes which encompass the resummation of final-state collinear logarithms. Introduction The production of heavy quarks in association with other particles at hadron colliders represents a crucial testing ground for our understanding of perturbative Quantum Chromodynamics (QCD) in the presence of several energy scales. This class of processes is governed by at least two scales, namely the heavy-quark mass m and the (invariant) mass M of the particle(s) produced along with the heavy quark. In these cases, large collinear logarithms of the ratio M m may jeopardise the convergence of the perturbative expansion of relevant theoretical predictions. Fortunately, the impact of these logarithmic contributions can be controlled by resumming them to all orders in α s , via a scheme in which the heavy quark mass m is neglected at the level of the matrix element. Such a scheme is often referred to as massless or five-flavour scheme (5FS), in case the heavy quark is identified with the bottom quark. As far as heavy quarks in the initial state are concerned, this procedure amounts to introduce a suitable parton distribution for the heavy quark. An analogous JHEP01(2020)196 procedure for heavy quarks in the final state involves the use of fragmentation functions, and is the subject of the present work. A scheme in which the heavy quark is produced at the matrix-element level and is not treated on the same footings as the light quarks is dubbed as massive scheme or four-flavour scheme (4FS). The resummation of powers of log(M/m) in a 5FS is performed by solving the evolution equations (usually referred to as Dokshitzer-Gribov-Lipatov-Altarelli-Parisi, or DGLAP, equations), at the price of discarding power corrections of O(m 2 /M 2 ), and thus of yielding less accurate theoretical predictions for the observables related to the heavy-quark degrees of freedom. In [1,2], it has been shown that, for processes in which the heavy quarks (more specifically bottom quarks) are dominantly produced via initial-state (spacelike) splittings, the theoretical predictions in 4FS are not spoiled by initial-state collinear logarithms. This is due to two main factors, one of dynamical and the other of kinematical origin. The first is that the effects of the resummation of the initial-state collinear logarithms are relevant mainly at large x and, in general, keeping only the explicit logs appearing at NLO in the 4FS is a good approximation. The second reason is that the scale which appears in the collinear logarithms turns out to be proportional to the hard scale of the process but is suppressed by universal phase space factors that, at hadron colliders, reduce the size of the logarithms for processes taking place. 
This result makes it not only possible, but also advisable -owing to the better perturbative description of the differential observables involving the heavy quark(s) -to employ the 4FS for the exclusive description of these processes. This has been shown explicitly to be the case in single-top production [3,4], bbH [5][6][7][8][9][10][11][12][13][14][15][16] and bbZ/γ production [16][17][18][19][20][21][22][23], and also for processes predicted by extensions of the Standard Model (SM), such as heavy charged Higgs boson production in a two-Higgs doublet model or in supersymmetry [24][25][26][27][28][29][30][31][32][33][34]. On the other hand, the calculations of the total rates in the 5FS display a faster perturbative convergence and exhibit a smaller scale uncertainty associated with missing higher orders. Methods that combine the 4F and the 5F schemes, retaining the advantages of both, are actually available, but they are generally tailored to a few specific observables. The FONLL scheme, first proposed for the transverse momentum spectrum of bottom quarks produced in hadronic collisions [35], has the advantage of being universally applicable and of allowing one to combine 4FS and 5FS calculations performed at any perturbative order. The formulation of the FONLL scheme has been extended to deep-inelastic scattering (DIS) [36] and adapted to the computation of the total cross section for Higgs and Z production in bottom-quark fusion [37][38][39]. Various recent attempts to consistently include both the resummation of initial-state collinear logarithms and mass effects also for differential and parton-shower matched observables have been recently put forward, see for example the five-flavour-massive scheme proposed in [16,40] or a similar approach based on multi-jet merging [41]. Improvements at the inclusive and at the differential level are on-going. Finally, consistent b-quark PDFs to be used in association with massive initial states have also been defined [43], thus allowing the bottom quark to be endowed with a standard PDF satisfying DGLAP evolution equations, yet treating it as massive in hard matrix elements. While initial-state collinear logarithms have been studied in details in the abovementioned literature, the situation is much less clear for processes in which final-state JHEP01(2020)196 (timelike) splittings into heavy quarks contribute significantly to the process. It has to be mentioned that, for what concerns the production of flavoured jets, higher-order corrections are generally not finite (or they are logarithmically enhanced in a massive scheme) unless dedicated jet algorithms are employed, see ref. [42]. Processes featuring final-state splittings into heavy quarks include bbW production [18,19,[44][45][46][47][48][49], the top-mediated contribution to bbH production [50], ttbb production [51][52][53][54][55][56] and multi-b final states [57,58] (mostly relevant for di-or triple-Higgs searches [59][60][61]). While the importance of the resummation of collinear logarithms has been partially investigated for Q → Qg splittings [35,62,63], no assessment of the impact of the logarithms of M m exists to date, as far as the g → QQ splittings are concerned. An interesting process which involves bottom quarks in the final state is the production of ttbb. 
This process is an important background to Higgs and top associated production, a unique probe of the Yukawa coupling between the Higgs scalar and top quarks, and therefore it is of great relevance for present-day analyses [64,65]. Different tools are available to simulate this process in a 4FS, including NLO QCD corrections and matching with parton showers. However, even when tuned comparisons are performed [66], the predictions obtained by different tools display rather large differences, which have a dominant impact on the systematic uncertainty in the determination of the top Yukawa coupling. 1 The work necessary to improve this situation by increasing the perturbative order of the computation is not straightforward. Given the high multiplicity and the number of scales involved in this kind of processes, the NNLO corrections in the 4FS are very hard to compute. On the other hand, NLO QCD predictions for ttbb plus one light jet have recently become available [67]. Assuming that the distortion due to parton showers is small, these calculations could help to validate the light-jet spectrum. If the resummation of collinear logarithms associated with the final-state splittings of gluons and bottom quarks is found to have a strong impact on this observable, then a matched calculation could solve the observed discrepancies. It is the purpose of the present work to make a first step in this direction. In this paper, we assess the impact of missing powers of log M m associated to final-state splittings by means of fragmentation functions (FFs). Heavy quark FFs can be computed in perturbation theory in QCD, starting from initial conditions at a reference scale µ 0 and employing the timelike DGLAP evolution equations to evolve them up to any other scale. Initial conditions for the gluon-and heavy-quark-initiated fragmentation into a heavy quark are known at order α s [62,68] and have been computed at order α 2 s [69,70], while the DGLAP evolution equation is implemented in public codes such as QCDnum [71], ffevol [72], APFEL [73] or MELA [74], up to NNLL logarithmic accuracy. The codes have been benchmarked in [74]. An approach based on FFs will enable us to study the dynamics of the bottom fragmentation in details and in an isolated environment. In particular, the importance of the resummation of potentially large logarithmic contributions can be assessed by comparing resummed predictions to the truncation of the FF at a given order in α s , and the impact of the resummation of sub-leading logarithms can be studied up to JHEP01(2020)196 NNLL accuracy. It must be stressed that, while the importance of resumming collinear logarithms in bottom-quark initiated fragmentation has been known for a long time at NLL [35,62,63], much less attention has been devoted to the role of gluon-initiated fragmentation to heavy quarks (one exception is ref. [75] in which the case of charm meson production was studied). Such a negligence was justified the past by the sub-dominant importance of this mechanism at LEP and at Tevatron, but this is no longer the case at the LHC for the g → bb splitting, and will not be the case for the g → tt splitting at future colliders. The paper is organised as follows. We review the details of timelike DGLAP evolution in section 2, where we also discuss how to truncate the evolution at a given order in α s . In section 3, we discuss the setup employed for this computation, while results are presented in section 4. 
In the light of our results, in section 5 we comment on how to simulate processes in which b quarks are dominantly created in final-state splittings. In section 6 we link our findings to those obtained in the context of heavy quark multiplicity estimates. We draw our conclusions in section 7, where we also discuss future outlooks of our work. In appendix A, we provide supplementary material, namely the explicit expressions of the truncated FFs up to order α 3 s together with the discussion of their numerical validation. Timelike DGLAP evolution In this section we review the formalism of scale evolution for fragmentation functions, with the main purpose of fixing notations and conventions. We also set the ground for the derivation of explicit formulae for heavy-quark fragmentation functions at fixed order up to order α 3 s , which are reported in appendix A. Strong coupling constant We adopt the following notation for the evolution of the running coupling constant α s (µ) of strong interactions: , µ 0 is a fixed reference scale, and As usual, T F = 1 2 , C A = 3 and C F = 4 3 for three-colour QCD. The number of active flavours n f will always be set to 5. We will need the expansion of α s (µ) in powers of α s (µ 0 ) truncated at O(α 3 s ), which is given by The truncated expansion of α s (µ 0 ) in terms of α s (µ) can be trivially obtained by swapping µ and µ 0 in the above equation obtaining Fragmentation functions We consider the differential cross section dσ dx for a generic process with a heavy quark Q of mass m in the final state, where x is the energy fraction carried by the heavy quark: where E Q is the energy of the heavy quark, and E the energy of the parton originating from the matrix element. Then, standard factorisation implies where dσ i dz is the partonic cross section for parton i in the final state with energy fraction z, and D i (x, µ, m) is the fragmentation function of parton i into the heavy quark. The fragmentation functions depend on the factorisation scale µ according to the evolution equations The timelike splitting functions P ij have a power expansion in α s , whose coefficients have been computed up to NNLO, and can be found in [76][77][78]. Note that, in the case of timelike evolution, we have contrary to what happens in the spacelike case. The timelike splitting functions are the same as the spacelike ones at LO, while they differ at NLO and higher. The DGLAP evolution equations are conveniently solved for Mellin-transformed quantities, because Mellin transformation turns the integro-differential DGLAP equations into ordinary differential equations. We define the Mellin transform f (N ) of a generic function f (x) by (2.9) We will use the same symbol for a function and its Mellin transform; this does not lead to confusion, as long as functional arguments are explicitly indicated. We rewrite the timelike DGLAP evolution equations as (2.11) Initial conditions We will need suitable initial conditions for the fragmentation functions. The perturbative initial conditions have been computed at order α s in refs. [63,68] and α 2 s in refs. [69,70]: Initial conditions at order α 3 s are currently unknown, and will be neglected in the following. It is interesting to notice that the initial condition for the b quark fragmentation function contains a non-logarithmic term already at NLO, namely contrary to the case of the b quark PDF in the spacelike evolution [82,83]. 
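Before turning to the gluon case, the structure of these order-α s initial conditions can be made concrete with a short numeric sketch. This is not the code used in the paper: the expressions below are the standard O(α s ) initial conditions of the type given in refs. [63,68], written here from memory (normalisation conventions should be checked against those references), the plus distribution is handled only at the level of Mellin moments, and the values of m and α s are purely illustrative.

# Minimal sketch of the O(alpha_s) heavy-quark fragmentation initial conditions.
import numpy as np
from scipy.integrate import quad

CF, TF = 4.0 / 3.0, 0.5

def d_b_regular(x, mu0, m, alpha_s):
    """Regular part of the O(alpha_s) b -> b initial condition; the full result
    is a plus distribution plus delta(1-x), handled below only in moments."""
    L = np.log(mu0**2 / m**2)
    return alpha_s * CF / (2 * np.pi) * (1 + x**2) / (1 - x) * (L - 2 * np.log(1 - x) - 1)

def d_g(x, mu0, m, alpha_s):
    """O(alpha_s) g -> b initial condition: purely logarithmic, zero at mu0 = m."""
    L = np.log(mu0**2 / m**2)
    return alpha_s * TF / (2 * np.pi) * (x**2 + (1 - x)**2) * L

def mellin_moment_d_b(N, mu0, m, alpha_s):
    """N-th Mellin moment of D_b at the initial scale: delta(1-x) contributes 1,
    the plus prescription is the integral of (x**(N-1) - 1) times the regular part."""
    integrand = lambda x: (x**(N - 1) - 1.0) * d_b_regular(x, mu0, m, alpha_s)
    plus_part, _ = quad(integrand, 0.0, 1.0, limit=200)
    return 1.0 + plus_part

if __name__ == "__main__":
    m, a_s = 4.7, 0.22                                     # illustrative values near threshold
    print(d_g(0.3, m, m, a_s))                             # vanishes at mu0 = m
    print(d_g(0.3, 2 * m, m, a_s) - d_g(0.7, 2 * m, m, a_s))  # symmetric under x <-> 1-x
    print(mellin_moment_d_b(2, m, m, a_s))                 # non-logarithmic term survives at mu0 = m

The last line illustrates the point made above: even at µ 0 = m, the quark-initiated initial condition deviates from δ(1 − x) through its non-logarithmic term, while the gluon-initiated one vanishes identically.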
The initialscale gluon fragmentation function has instead only a logarithmic term that vanishes when µ 0 = m: JHEP01(2020)196 It is customary to separate singlet from non-singlet evolution in the DGLAP equations. To this purpose, we define the combinations with the valence contributions evolving according to the non-singlet (V ) timelike evolution equations and the triplet contributions evolving according to the non-singlet (+) timelike evolution equations. The evolution of the singlet combination D Σ is coupled with the gluon. The bottom quark fragmentation is given by 20) and the non-vanishing initial conditions are given by To determine D b and D g up to NNLO and their expansions up to O(α 3 s ), we need solutions of the evolution equations for both the singlet and the triplet combinations for D T 24 and D V b respectively, and d dt Note that eq. (2.26) differs from eq. (2.4) of ref. [74] only formally, because in the latter γ qg is the element of the spacelike singlet matrix evolution, that features a factor 2n f . 3 Truncated solution of DGLAP equation in Mellin space The DGLAP equations are usually solved in order to resum large logarithmic contributions; the fragmentation functions are evolved from a reference scale µ 0 to a generic scale µ through an evolution operator, which in turn is given by an expansion in powers of fixed. Here, we would like to compare such a logarithmic (resummed) expansion with a truncated one, that is, a solution expressed as a power series in α s (µ), up to a certain order. We rewrite eq. (2.10) in matrix form, and with some of the functional arguments omitted, to keep notation simple: and T is the time-ordering operator. The matrix γ has a Taylor expansion in α s , given in eq. (2.11), which starts at order α s . Therefore, it is easy to truncate the expansion of U(t, 0) to any given order. Given that we are interested in the solution up to NNLO, we keep terms up to order α 3 s in U(t, 0). We find . Next, α s (µ 1 ), α s (µ 2 ) are expanded in powers of α s (µ 0 ) as in eq. (2.3), and the integrals easily performed. Finally, α s (µ 0 ) is re-expressed in terms of α s (µ), according to eq. (2.4). In the following, we will call LO, NLO and NNLO truncated FFs, respectively the expression obtained by evolving the initial conditions of eq. (2.16) with the evolution operator in eq. (2.30), and retaining terms up to order α s (µ), α 2 s (µ) and α 3 s (µ) respectively. 4 The full expressions are reported in appendix A. Setup of the computation The results presented in this paper are obtained by means of a private computer code which links the public MELA (Mellin Evolution LibrAry) library [74]. MELA is an evolution program in Mellin space, developed specifically to provide a simple and user-friendly framework complementary to (and also serving as a validation of) the code APFEL [73], which works in x space. For the running of α s , we use the routines implemented in MELA with α s (M Z ) = 0.11856 and M Z = 91.187 GeV, that solve the renormalisation group equation for α s (µ) consistently with the DGLAP timelike equations. The charm and bottom thresholds are set to m c = 1.4142 GeV and m b = 4.7 GeV respectively. The top quark mass m t is set to infinity, so that n f = 5 at all scales. The timelike splitting functions at LO, NLO and NNLO in the N space are taken directly from MELA. Note that, due to the complexity of the expressions entering the NNLO splitting functions, MELA implements the approximate representation of ref. [85]. 
It was checked in [78] that, except for very small values of x, such approximate expressions deviate from the exact ones by less than one part in a thousand. The N -space solution of the timelike DGLAP evolution equation at LL, NLL and NNLL are also taken from MELA. MELA implements the analytical solutions of the DGLAP evolution equations as in PEGASUS [86], both the truncated solution and the iterated solution. In the former, the resummed solution to the DGLAP equations in N space is exact up to terms of higher orders in the perturbative expansion with respect to the order of the DGLAP solution. In the iterated solution, all orders are kept in the solution of the DGLAP equation. The N m LL solutions differ in terms of order n > m. In our case, we have verified that the effect of the resummation of collinear logarithms does not depend on the settings of the solution of the DGLAP evolution equation. The initial perturbative conditions have been implemented in our own code in the N space up to order α 2 s by numerically Mellin-transforming the x-space expressions of refs. [69,70] for real N . Di-and tri-logarithms appearing in these expressions are evaluated with Chaplin [87]. Since we are currently lacking an analytically-continued Mellin transform of the O(α 2 s ) terms of the initial conditions 5 the evolved expressions cannot be inverted to x 4 Note that D b starts at order α 0 s (µ). 5 More specifically: we succeeded to obtain analytically-continued Mellin transforms for d q , following the results and algorithms of refs. [88][89][90][91][92], (as implemented in the codes ancont and ancont1), while for d (2) g some terms are still missing. We plan to report on the complete analytic continuation of the initial conditions in a following publication. JHEP01(2020)196 space if these initial conditions are included. As their impact in Mellin space is found to be mild and rather flat in the N space, as we will show explicitly in section 4, we argue that they will not play an important role in the x space. The numerical inversion of the N -space truncated and resummed fragmentation functions from N space back to x space is performed by means of an implementation of the Mellin inversion based on the Talbot-path algorithm [93]. Matching conditions are implemented in the treatment of flavour threshold crossing in the evolution of the fragmentation functions. Impact of collinear resummation and results In this section, we present results for the resummed and truncated FFs at different mass scales, in Mellin (N ) space as well as in the physical space of the energy fraction x carried by the heavy quark. By comparing the truncated and resummed predictions for the FFs for different values of the scales µ and µ 0 and at different orders, we can: i) assess the typical size of the effects due to the resummation of final-state collinear logarithms, in particular with respect to an approximation in which only logarithms up to a given order in perturbation theory are included; ii) compare the behaviour of the bottom-quark and gluon initiated FF; iii) determine the importance of the inclusion of initial conditions at order α 2 s , computed in [69,70]. In particular, the last point and the importance of the gluon FF have been neglected so far in the literature, see e.g. [63]. We start by presenting results in N space, for the D b and D g fragmentation functions, in figures 1 and 2 respectively. The layout of the figures is the following. 
Shades of red (blue), from lighter and more finely dashed to darker and solid, are used for truncated (resummed) predictions of increasing perturbative orders, computed without initial conditions at order α 2 s . Symbols are used for the NLO (NLL) predictions which include the full initial conditions up to order α 2 s . Left panels show results for the FFs, while right panels show the corresponding ratios w.r.t. the NNLL prediction. In the top panels, an initial scale µ 0 = m = 4.7 GeV is employed, while in the bottom ones it is set to µ 0 = 2m. Finally, each panel shows results at four different value of the scale µ: µ = 10, 30, 100, 300 GeV, from left to right and from top to bottom. First, we inspect the behaviour of D b (N ), displayed in figure 1: in the left plots, we can see how the resummed predictions are hard to distinguish one from another, and also how the NNLO curve is close to them. This is not the case for the LO Figure 2. Same as figure 1, for the gluon fragmentation function. The N range in figure 1 is chosen to be 0 ≤ N ≤ 40 for illustrative purposes. However, this is not really consistent, because at very large values of N (which correspond to values of x close to 1) the initial conditions, computed at a fixed order in α s , get large corrections from higher order contributions due to the presence of large powers of log N in the perturbative coefficients. The effect of such large logarithms is that the Mellin transforms of the fragmentation functions in figure 1 become negative around N ∼ 20, and consequently the ratio plots display a peak in that region. This problem was pointed out in [94], where it is also argued that a resummation of large N logarithms in the initial condition would push the zero of the fragmentation functions toward much larger values of N . Even large-N resummation, however, would not make the fragmentation functions positive in the whole JHEP01(2020)196 range, due to non-perturbative effects, or equivalently due to the presence of the Landau pole in the strong coupling at very small energy scales. Perturbation theory, even in its resummed version, cannot provide a reliable description of fragmentation functions for x larger than approximately 1 − Λ QCD /µ. In the ratio plots the features described above can be appreciated with more details. In particular, it can be appreciated how the NNLO-accurate prediction starts to depart from the NNLL at large scales and large values of N , while in general, with the exception of the aforementioned spike, resummed computations show a better agreement with each other at large N . Increasing the initial scale µ 0 , reduces the differences between the resummed and truncated predictions, as it is expected and as it was studied in details in [95]. However the global pattern is unchanged. Finally, we observe how the impact of initial condition at order α 2 s is rather mild (at or below the 10% level), regardless of the scale. Turning to D g (N ), in figure 2, we observe that the truncated expressions depart very quickly from the resummed ones, and how the perturbative series wildly oscillates between negative and positive values for N already as large as 8, with large differences from the resummed curves. On the other hand, resummed curves lie rather close to each other (again, with the exception of the spike at N = 20, induced by the initial conditions of the quark FF), with differences that decrease with the scale, because of the running of α s . 
Differences are reduced when the initial scale is doubled, and the impact of initial conditions is mild and rather constant with N . In figures 3 and 4 the x-space results are displayed. They generally reflect the pattern of the Mellin-space results, giving a more direct feeling of the physics of the final-state splittings. Looking at D b (x), in figure 3, we appreciate how close the three resummed predictions are, for the four values of µ considered. Differences among the LL, NLL, and NNLL predictions are always within 10% and with flat ratios, with the exception of very small and very large x (x < 0.1 and x > 0.9). The first regime may only partially be accessible, since the physical regime is typically x > m µ . The behaviour in the second (large-x) regime may be improved by resumming large-x logarithms on top of the DGLAP ones [94]. As far as the truncated predictions are concerned, they generally show a harder shape than the resummed ones (more steeply peaked towards x = 1), and the hardness decreases as higher orders are included. This is consistent with the fact that higher order effects (i.e. extra radiations) soften the b quark during the fragmentation, and in the case of resummed predictions these effects are included to all orders. If we take µ = 100 GeV as a representative scale, µ 0 = 4.7 GeV, and consider the range 0.1 < x < 0.9, the NLO-truncated prediction undershoots the NNLL resummed one of -25% at small x, and overshoots it of +50% at large x. At NNLO, differences are much reduced, at the level of -10% and + 15%. The gluon-initiated FF, D g (x), on the other hand, exhibits much larger differences between the truncated and resummed predictions. The most visible feature is that the LO FFs is symmetric around x = 0.5, while all the others are not. This is directly related to the symmetry of the P qg splitting function, which is the only term at LO, as it can be seen from the first line of eq. (A.4). 6 As a consequence, the shape of the LO-truncated 6 The initial condition d (1) g is also proportional to Pqg. Figure 3. prediction does not change with the scale. Again, higher-order predictions soften the shape of the splitting function, with rather dramatic effects both going from LO to NLO and from NLO to NNLO. In section 5 we will show that these effects are dominantly due to the radiation from the parent gluon. As we did for D b (x), considering the case µ = 100 GeV, µ 0 = 4.7 GeV, it is apparent how the (N)NLO prediction exceeds the NNLL baseline by 80% (15%) at large x, and undershoots it by -20% (-5%) at small x. Comparing resummed predictions among themselves shows, again, that the effect of sub-leading logarithms is rather mild and, as anticipated by studying the behaviour in Mellin space, it is reduced when the scale µ is increased. Finally, some pathologic behaviour is visible both at small and large x. The latter can likely be cured by resumming large-x logarithms in the quark initial conditions [94], while for the former small-x resummation and coherence effects need to be considered, two ingredient which are crucial in order to obtain correct predictions for heavy-quark multiplicities [96], as we will discuss in section 6. JHEP01(2020)196 We conclude this section by discussing the dependence of truncated and resummed predictions on the initial scale µ 0 . The effect of changing the initial scale in the case of perturbatively generated bottom PDFs has been studied in details in [95]. 
It is interesting to compare the effects in the case of perturbative bottom fragmentation functions. This is shown in figure 5, in x space only, both for D b (left panels) and D g (right panels). In this figure we plot, for each of the truncated and resummed predictions, the ratio D p (µ 0 = 2m)/D p (µ 0 = m), for the same values of µ as before. First, we observe that at LO, no µ 0 dependence is there, neither for D b nor for D g . This can be easily understood by looking at the initial conditions in eqs. (2.17) and (2.18), and at the truncated expressions in eqs. (A.5) and (A.4): the coefficient of the anomalous dimension, in both cases, will be log For the other predictions, both truncated and resummed, we observe how the µ 0 dependence is rather mild (less than 10% for D b and 20% for D g for µ ≥ 100 GeV) for intermediate values of x and it decreases with the scale, because of the DGLAP evolution. Truncated predictions exhibit a more unstable behaviour at large x, with a divergent structure in the pathologic region where D p (µ 0 = m) vanishes, and displaying larger uncertainties for higher perturbative orders. The same behaviour, albeit with reduced µ 0 dependence, is exhibited by resummed predictions. Overall, the µ 0 dependence cannot be advocated to explain the large differences between fixed-order and resummed predictions discussed earlier in this section. JHEP01(2020)196 ∼ P gg ∼ P qq Figure 6. The g → bb splitting, dressed with extra gluon radiation. In the collinear limit, radiation off the parent gluon (red) corresponds to factors of P gg , while radiation off the quarks (blue) to factors P qq . On the simulation of processes with b quarks originated by timelike splittings The results presented in section 4, in particular those regarding the gluon-initiated FF, can provide instructive information on the dynamics of final-state g → bb splittings. As mentioned in the introduction, such a mechanism is relevant for processes such as bbW , y t -induced bbH, ttbb and multi-b production. We can schematically represent the g → bb splitting, including extra gluon emissions, as in figure 6. In that figure, the radiation off the parent gluon is shown in red, while the radiation off the originating bottom quark is shown in blue. Given the large effects observed in section 4, a natural question to ask is whether the former or the latter type of radiation play a dominant role. At least two arguments can be used to show that the largest effects originate from the radiation off the gluon. The first argument is related to color factors: in the collinear limit, each splitting from the parent gluon corresponds to a factor P gg , proportional to C A . Conversely, radiation off the quark corresponds to P qq , proportional to C F ; since C A 2C F , one expects the former effect to dominate over the latter. The second argument is that, as it is visible in figure 3, higher order effects distort the LO gluon-initiated FF towards small x, and P gg is the only splitting function which is singular in that regime. We support these arguments by explicitly showing, in figure 7, the NLO and LL predictions for D g (x) when setting P gg = 0, and comparing them to the full predictions (note that at NLO -second, third and fourth line of eq. (A.4) -the logarithmically-dominant term has either a single emission from the parent gluon, or one from the bottom quark). We choose µ = 100 GeV, µ 0 = 4.7 GeV as a representative example. 
We can clearly infer the importance of the emissions from the parent gluon, particularly in the case of the NLO prediction. In that case, the single emission from the quark only mildly affects the symmetry of the FF. Also in the LL-resummed case, the prediction with P gg = 0 is much closer to the LO than to the complete LL prediction. These findings bear quite important consequences for the simulation of exclusive observables or in general observables sensitive to the b-quark degrees of freedom, in particular when predictions matched to parton shower are considered. Since a parton shower radiates from external partons, if the bottom quarks appear in the hard-scattering process, only JHEP01(2020)196 the radiation off the bottom quarks (blue in figure 6) will be generated, while the radiation off the parent gluon will be included only at a given order in perturbation theory, typically NLO (with the exception of the results in [67] which may be considered partly NNLO.) This is clearly not optimal. In general, resummed predictions exhibit a better perturbative convergence with respect to finite-order calculations al ready at leading log. Hence, in regimes dominated by the splitting mechanisms, it may be more appropriate to generate b quarks by shower-evolving light partons, thus generating both kinds of radiation shown in figure 6, rather than to include them at the matrix-element level. 7 Of course, some caveats must be considered. The above statement holds in case of exclusive observables, for which larger effects are expected. More inclusive observables (typically those related to the b-jet degrees of freedom) will display smaller effects. In section 6, for example, we will show that this indeed the case for heavy quark multiplicities: the effect of the resummation of final state collinear logarithms is much milder. Furthermore, an important assumption we are making is that fixed-order computations with a timelike g → bb splitting follow the pattern of the FF at the corresponding order. This is certainly reasonable to assume, at least in those kinematic regions in which the g → bb splitting topology dominates. However other effects must be taken into account: for example, mass effects typically affect the endpoint (x → 1 and x → 0) behaviours of the FF, although not in a dramatic way. A further aspect is that the collinear approximation underlying a FF-based approach neglects the fact that the radiation recoil is spread among the other particles in the final state (see the extensive discussion in the case of ttbb in [67]). An assessment of the impact of these effects on physical observables requires a convolution of fragmentation functions with a suitable partonic cross section. This task is left for future work. JHEP01(2020)196 6 Relation with heavy quark multiplicities in gluon jets Theoretical predictions for jet multiplicities, and in particular of heavy quark multiplicities, has witnessed an important effort of the theory community during the 80's (see e.g. [96,97] and references therein). It is instructive to illustrate if, and in case how, FFs can be employed to predict such multiplicities. We start by considering the main result of ref. 
[96], namely the probability of a gluon with virtuality Q 2 to split into a pair of quark-antiquark with mass m: where the gluon multiplicity n g is given by 8 The exponent a has the value a = − 1 , but it can be set to zero as long as one's interest is restricted to the leading-logarithmic behaviour, which is the case we are considering in this section. The authors of [96] mention that, in order to get the correct multiplicity, coherence effects have to be accounted for in a systematic way. We will discuss the corresponding effects in the case of FFs. We proceed by expanding eq. (6.1) in powers of α s (Q 2 ) neglecting all m effects in the integrand, and by comparing such an expansion with the leading-logarithmic terms in our truncation for D g , eq. (A.4). The expansion of eq. (6.1) to order α 2 s (Q) reads: This expression should be compared with the first moment of the gluon fragmentation function, as given in eq. (A.4), expanded to second order in α s (Q). Keeping only the leading-log terms in eq. (A.4) we get where t = log Q 2 µ 2 0 , and we have used eqs. (2.12)-(2.15) for N = 1: We see that eqs. (6.3) and (6.4) actually coincide, with the choice µ 0 = 2m, apart from the term proportional to C A , which is singular as N → 1. This singularity arises from the 8 With respect to the original eq. 1.2 in [96], we have replaced log Q 2 Λ 2 by 1 b 0 αs(Q) (and similarly for log K 2 Λ 2 ). JHEP01(2020)196 small-x behaviour of the fragmentation function, which diverges as 1 x . One may regularise this singularity by restricting the integration range to x min ≤ x ≤ 1, with x min of order m 2 Q 2 . This already provides the extra power of log m 2 Q 2 which appears in the C A term in eq. (6.3), but fails to reproduce the coefficient of the C A term in eq. (6.3). A refinement of this procedure is achieved by including a kinematical constraints in the form of a x dependence of the scale argument of the fragmentation function [98,99]. This is motivated by the observation that the virtuality of a particle scales approximately as x when it decays into two bodies with energy fractions x and 1 − x, in the x → 0 limit. The corresponding integral reads which is a factor 2 larger than the corresponding term in eq. (6.3). The origin of this residual discrepancy is due to dynamic (rather than kinematic) effects related to color coherence and angular ordering, as it was shown explicitly in [100]. In general, at order α p s (Q), an extra factor 2 p−1 will appear in the most singular term (∼ C A p−1 log 2p−1 Q 2 4m 2 ) of the FF-based prediction for the heavy quark multiplicity. 9 Color coherence effects are not included in our framework, and require matching with small-x resummation in order to be fully accounted for. We conclude this section by showing, in figure 8, the comparison of the truncation of eq. (6.1) with the full result. The effect of the truncation can be appreciated both by considering the absolute multiplicity (left panel), and the ratio of the truncation up to order α s (Q), α 2 s (Q) and α 3 s (Q) over the complete result. We do not set the exponent a to zero in this case. Owing to the inclusiveness of this observable, effects are much milder than those observed in section 4: at Q = 100 GeV, the LO prediction is about 70% of the total, while the NLO one approximates the total by less than 10%. 
10 This indicates how the importance of the effects described in this paper changes when inclusive observables are considered, and motivates further works assessing the impact of resummation of collinear logarithm on realistic cross sections. Conclusions and outlook One of the major obstacles to precision physics at the LHC and at future hadron colliders is currently given by our limited understanding of the associate production of heavy quarks (typically bottom quarks) and heavier objects. Because of their multi-scale nature, the description of such processes in perturbative QCD is highly non-trivial. In particular, processes where heavy quarks are dominantly produced via final-state splittings are JHEP01(2020)196 affected by the largest theoretical uncertainties, both due to missing higher orders and to parton-shower and matching systematics. In this paper, we make a step forward in the comprehension of the dynamics underlying these processes. We adopt a FF-based approach, which allows us to assess the impact of logarithms appearing at each order in perturbation theory, and to establish the importance of their resummation. By considering truncated FFs up to NNLO in QCD, we mimic the description of a fixed-order computation for processes involving the corresponding splittings at the same order. We investigated both bottom-initiated and gluon-initiated production of a bottom quark. In both cases, and particularly for the latter, a fixed-order description at LO or NLO turns out not to be adequate, and either NNLO effects must be included or collinear logarithms must be resummed to all orders in order to get reliable predictions for the bottom quark kinematics. When more inclusive observables are considered, these effects are (much) reduced, as it has been shown for the case of heavy-quark multiplicities. While the limits of a fixed-order description have been known for some time in the case of bottom-initiated splitting, which is relevant e.g. for heavy-flavour production at large transverse momenta, to the best of our knowledge this has never been investigated for the gluon-initiated heavy quark production. We have discussed in details the implications of our findings in the choice of scheme to describe this kind of processes. Our analysis motivates the effort to develop techniques aimed at combining calculations matched to parton shower, which retain the advantages of the 4F and the 5F schemes in the appropriate kinematics region, as it is currently being developed for example in [41]. Furthermore, our study outlines both a similarity and a difference between the timelike and the spacelike regimes. In particular, the evolution of a gluon from a high to a low scale (timelike) or from a low to a high one (spacelike), is associated mostly with radiating gluons, and only eventually a gluon splits into a heavy-quark pair. This is true in both cases. The main difference is JHEP01(2020)196 that, in the case of spacelike splitting, the heavy quark line enters the scattering process and therefore gluon emissions are resummed by the evolution of parton distributions even in a 4FS. On the contrary, in the case of a timelike splitting, the gluon is the particle linked with the rest of the scattering, hence no resummation of these emissions is performed. To conclude, for processes dominated by final-state g → bb splittings, the dominant contribution comes from the radiation off the parent gluon, rather than off the bottom (anti-) quark. 
A recent study on charm production corroborates these findings: in fact, in ref. [75] the authors comment on the importance of a proper treatment of final-state splittings and on issues related to NLO+PS simulations in a massive scheme (see figure 12 therein and the relative discussion). As a result, simulations where bottom quarks are treated at the matrix-element level might not be the most adequate to accurately describe these processes, at least in phase-space regions in which these splittings are dominant. Despite a massive scheme (or more generally a scheme where bottom quarks are generated at the hard matrix-element level) is often advocated as superior with respect to a massless one, thanks to the possibility it provides to describe the whole phase space, without cuts, we have shown that assuming the smallness of collinear logarithms, analogously to what happens with their initial-state counterpart, is not always correct and may yield serious flaws in the description of the kinematics of the b quark. This work has several natural follow-ups. First we will improve the description of FFs by including the second-order initial-conditions in the x space, and assess the impact of large-and small-x resummation. While these improvements will make our results more consistent, we do not expect them to change the final picture in any dramatic way. Most importantly, we will assess the importance of final-state collinear logarithms on a realistic process and compare a FF-based description (both using resummed and truncated results) within a NLO-accurate computation (the inclusion of FFs in NLO subtraction schemes can be found in refs. [101,102]) with a description at fixed order in QCD and possibly with data. While typical analyses for processes like W bb and ttbb study b-jet observables (for the latter, see the recent analyses [103,104]), it is not unreasonable to expect that more exclusive quantities related to the bottom-flavoured hadrons will be measured, specially with the larger statistics of the upcoming LHC runs. A Truncated expressions and validation In this appendix we give explicit expressions for the truncation of the FFs up to oder α 3 s and discuss the method we used to validate numerically our analytic expressions. A.1 Non-singlet solution Expanding both eq. (2.30) and the initial conditions up to O(α 3 s ), and omitting the N dependence everywhere and the m dependence from the coefficients of the initial conditions, we obtain the following truncated solutions: and 11 Note that the products in eq. (2.30) are products of 2 × 2 matrices. A.3 Bottom quark Summing up the singlet and non-singlet combinations according to eq. (2.20), we get where we have definedγ A.4 Validation To conclude this appendix, we discuss how the above expressions were validated in our computer code. We base our validation on two arguments: first, given an initial condition, MELA can provide the evolution up to NNLL accuracy; second, the difference ∆D p,q ≡ |D p,N q LO − D p,N q LL | α q+1 s , (A.7) where p = b, g, . . . and q = 0, 1, 2, should be of O(α s ). Thus, by changing the value of α s (m Z ), ∆D p,q must display the same scaling. We show this scaling in figure 9, for D b (left) as well as D g (right), in N space, for q = 0, 1, 2 respectively in the top, central and bottom row. We fix the scales to µ = 200 GeV, µ 0 = 20 GeV, and the bottom mass to m = 4.7 GeV (in particular, by choosing µ 0 = m, all initial conditions are non-zero). Figure 9. The scaling of the difference ∆D p,q , defined in eq. 
(A.7), w.r.t. α_s, for p = b, g (left, right) and q = 0, 1, 2 (top, middle and bottom rows). Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
10,760.4
2020-01-01T00:00:00.000
[ "Physics" ]
The CUPID-Mo experiment for neutrinoless double-beta decay: performance and prospects CUPID-Mo is a bolometric experiment to search for neutrinoless double-beta decay (0νββ) of 100 Mo. In this article, we detail the CUPID-Mo detector concept, assembly, and installation in the underground laboratory in Modane in 2018, and provide results from the first datasets. The demonstrator consists of an array of 20 scintillating bolometers based on 100 Mo-enriched Li 2 MoO 4 crystals. The detectors are complemented by 20 thin cryogenic Ge bolometers acting as light detectors to distinguish α from γ/β events by the detection of both heat and scintillation light signals. We observe good detector uniformity, facilitating the operation of a large detector array, as well as excellent energy resolution of 5.3 keV (6.5 keV) FWHM at 2615 keV in calibration (physics) data. Based on the observed energy resolutions and light yields, a separation of α particles at much better than 99.9%, with equally high acceptance for γ/β events, is expected for events in the region of interest for 100 Mo 0νββ. We present limits on the crystals' radiopurity (≤3 µBq/kg of 226 Ra and ≤2 µBq/kg of 232 Th). Based on these initial results we also discuss a sensitivity study for the science reach of the CUPID-Mo experiment, in particular the ability to set the most stringent half-life limit on the 100 Mo 0νββ decay after half a year of livetime. The achieved results show that CUPID-Mo is a successful demonstrator of the technology - developed in the framework of the LUMINEU project - selected for the CUPID experiment, a proposed follow-up of CUORE, the currently running first tonne-scale cryogenic 0νββ experiment. Introduction Two-neutrino double-beta decay (2νββ) is one of the rarest processes in nature. Initially proposed by Maria Goeppert-Mayer in 1935 [1], it has since been observed for 11 nuclei with typical half-lives ranging from 10 18 to 10 24 yr [2,3]. Numerous extensions of the Standard Model predict that double-beta decay could occur without neutrino emission (e.g., see [4,5,6,7,8]). This hypothetical transition, called neutrinoless double-beta decay (0νββ), is a lepton-number violating process. Its signature is a peak in the electron sum-energy spectrum at the Q-value of the transition (Q ββ).
Its observation could help explain the cosmological baryon asymmetry [9], and would prove that neutrinos are Majorana fermions (i.e., their own antiparticles) [10,5]. The current leading 0νββ decay experiments have a sensitivity on the 0νββ half-life of 10 25 -10 26 yr [11,12,13,14,15]. At present, there is no confirmed observational evidence for 0νββ decay, which implies that next-generation experiments have to further increase their discovery potential by at least one order of magnitude. One of the most promising technologies for 0νββ decay searches is cryogenic calorimeters, historically also referred to as bolometers [16]. These detectors are sensitive to the minute temperature rise induced by energy deposited in a crystal cooled to cryogenic temperatures (∼10 mK). Key benefits of bolometers are some of the best energy resolutions in the field (∆E(FWHM)/E ∼ 0.2%), high detection efficiency, and the possibility to grow radiopure crystals with a large degree of freedom in the choice of the material. Dual-readout devices, i.e. scintillating bolometers, further allow for particle identification and thus offer the prospect of studying multiple 0νββ decay candidate isotopes in the background-free regime [17,18]. The CUORE (Cryogenic Underground Observatory for Rare Events) experiment [19,14], currently collecting data at Laboratori Nazionali del Gran Sasso (LNGS, Italy), demonstrates the feasibility of a tonne-scale detector based on this technology. The success of this experiment is the starting point of CUPID (CUORE Upgrade with Particle IDentification), which aims to increase the mass of the 0νββ decay isotope via isotopic enrichment while decreasing the background in the region of interest. According to the CUORE background model, the dominant background in the 0νββ decay region originates from α particles emitted by radioactive contamination of the crystals or nearby materials [20]. CUPID aims to identify and suppress this background using scintillating crystals coupled to light detectors [21]. A further background suppression can be attained by choosing 0νββ decay emitters with a Q ββ well above the 2.6 MeV line of 208 Tl, which is typically the end-point of natural γ radioactivity. Bolometers containing isotopes such as 100 Mo (Q ββ = 3034.40 ± 0.17 keV [22]), 82 Se (Q ββ = 2997.9 ± 0.3 keV [23]) or 116 Cd (Q ββ = 2813.50 ± 0.13 keV [24]) satisfy this condition. The dual-readout concept, where both the heat and light signals are recorded, has been implemented in two medium-scale CUPID demonstrators: CUPID-0, taking data at LNGS since 2017, and CUPID-Mo, which started physics data taking at the beginning of 2019 in the Laboratoire Souterrain de Modane (LSM, France). Following the CUPID strategy, both experiments make use of enriched crystals (24 Zn 82 Se crystals for CUPID-0 [25] and 20 Li 2 100 MoO 4 crystals for CUPID-Mo) to search for 0νββ decay. With an exposure of ∼10 kg×yr, CUPID-0 proved that dual readout could suppress the dominant α background to a negligible level, obtaining the lowest background level for a bolometric experiment to date [26]. Nevertheless, the radiopurity and energy resolution (20.05±0.34 keV FWHM at Q ββ) of the CUPID-0 crystals [26,27] do not meet the requirements of CUPID and demand further R&D activity if Zn 82 Se were to be used.
Conversely, the Li 2 100 MoO 4 crystals chosen by CUPID-Mo have demonstrated excellent radiopurity and energy resolution in the tests performed within the LUMINEU (Luminescent Underground Molybdenum Investigation for NEUtrino mass and nature) experiment [28,29]. The primary goal of CUPID-Mo is to demonstrate on a larger scale the reproducibility of detector performance in terms of the high energy resolution and efficient α rejection power combined with high crystal radiopurity. Given the high number of 100 Mo emitters contained in enriched crystals and the favorable 0νββ transition probability for 100 Mo, CUPID-Mo also enables a competitive 0νββ decay search. The present work describes the CUPID-Mo experimental setup, currently operating in the EDELWEISS-III [30,?] cryostat at LSM. The detector was constructed in the clean rooms of the Laboratoire de l'Accélérateur Linéaire (LAL) and the Centre de Sciences Nucléaires et de Sciences de la Matière (CSNSM, Orsay, France) in the fall of 2017 and then moved to LSM and installed in the cryostat in January 2018. The detector was successfully operated through the summer of 2018 (Commissioning I). The fall of 2018 was devoted to cryostat maintenance, after a severe cryogenic failure, and to detector upgrades. After optimization of the cryogenic system and detectors over the winter of 2019 (Commissioning II), the experiment has been collecting data in a stable configuration since the end of March 2019 (Physics run). In this paper we present the CUPID-Mo detector concept and construction (Sec. 2), the operation and initial performance of the first Physics run dataset (Sec. 3), and the prospects of the experiment for the 0νββ decay search (Sec. 4). Experimental setup CUPID-Mo consists of an array of 20 scintillating bolometer modules arranged in five towers, each with four modules, as shown in Fig. 1. Each module contains one Li 2 100 MoO 4 crystal and one germanium wafer assembled inside a single-piece copper housing, instrumented with Neutron Transmutation Doped (NTD) Ge thermistors. All the materials used for the towers' construction were carefully selected, and additionally cleaned as needed to minimize radioactive contamination. The detector construction, transportation, and assembly into the underground cryogenic facility were performed in a clean environment. The key ingredients of the detector, its assembly, and the cryogenic apparatus are detailed below. Li 2 100 MoO 4 crystals CUPID-Mo operates the four existing Li 2 100 MoO 4 crystals previously used in LUMINEU [28,29]. An additional sixteen new Li 2 100 MoO 4 crystals were fabricated with the same procedure as that employed by LUMINEU [32,28,33]. All crystals have a cylindrical shape with ∼ 44 mm diameter and ∼ 45 mm height, and a mass of ∼ 0.2 kg. The crystals were produced at the Nikolaev Institute of Inorganic Chemistry (NIIC, Novosibirsk, Russia) as follows: purification of the ∼ 97% enriched molybdenum [32], previously used in the NEMO-3 experiment [34]; selection of lithium carbonate with low U/Th and 40 K content [28] and of purified 100 Mo oxide [33]; crystal growth via a double crystallization process using the low-thermal-gradient Czochralski technique [33,28]; slicing of the scintillation elements, and treatment of their surfaces with radio-pure SiO powder. The total mass of the 20 Li 2 100 MoO 4 crystals used in CUPID-Mo is 4.158 kg, corresponding to 2.264 kg of 100 Mo.
Ge slabs The high-purity Ge wafers (Umicore Electro-Optical Material, Geel, Belgium), used as absorbers for the scintillation light, have a diameter of 44.5 mm and a 175 µm thickness. A ∼70 nm SiO coating was evaporated on both sides of the Ge wafers to make them opaque, thus increasing the light collection by ∼35% [35]. A small part of the wafer surface was left uncoated to ease the gluing of a temperature sensor. Sensors CUPID-Mo employs NTD Ge thermistors [36] as thermal sensors. These thermistors were provided by the Lawrence Berkeley National Laboratory (LBNL, Berkeley, USA) and come from a single production batch. The NTDs used for the Li 2 MoO 4 (LMO) bolometers are 3.0 × 3.0 × 1.0 mm 3 in dimension, and have a temperature-dependent resistance given by R = R_0 · exp[(T_0/T)^0.5], where the average values of the parameters are T_0 = 3.8 K and R_0 = 1.5 Ω. Given the lower heat capacity of the Ge absorbers for the bolometric light detectors (LDs), we opted to better match and reduce the heat capacity of their sensors by dicing the NTDs into multiple pieces. In Commissioning I, we produced three sensors with 3.0 × 0.8 × 0.4 mm 3 dimensions from the slicing of a single NTD in two directions. The LDs with these sensors showed an unexpectedly high noise with a strong 1/f component reaching frequencies up to hundreds of Hz. For Commissioning II and beyond, we replaced all but two sensors with new ones with 3.0 × 0.8 × 1.0 mm 3 dimensions, avoiding the horizontal cut of the original NTDs. In addition to the thermistor, each Li 2 100 MoO 4 crystal is instrumented with a silicon-based resistive chip [37] operated as a heater. This heater allows us to periodically inject a constant power and generate a pulse of constant energy. The resulting reference pulses can be used in the offline analysis to monitor and correct for a change of the signal gain due to temperature drifts of the bolometer [38]. Sensor coupling The NTDs were glued on the Li 2 100 MoO 4 crystals using the dedicated tool shown in Fig. 2, similar to the one used by CUPID-0 [25]. The glue is a two-component epoxy resin (Araldite Rapid) well tested for cryogenic applications and demonstrated to have acceptable radiopurity [20]. The gluing tool features a part for holding the NTD, and it can be moved along the vertical axis and fixed at any level. The performance of the bolometer is strongly dependent upon the quality of the gluing, and we obtain optimal results when separate glue spots connect the NTD to the crystal. This helps in compensating for the different thermal contractions of the involved materials. To maintain separate glue spots during and after the epoxy curing, the NTD is kept 50 µm from the Li 2 100 MoO 4 crystal, which is positioned on the top surface of the gluing tool. The NTDs of five crystals (LMO-1-4,15), used in the LUMINEU experiment and/or the CUPID-Mo single tower test, were glued with six spots, while nine spots were applied for the remaining crystals. The heaters were glued with a single glue spot, using a 50 µm Mylar mask to provide a gap between the crystal surface and the chip. The gluing of NTDs to the Ge wafers was also performed with the two-component epoxy resin described above. However, instead of the six- or nine-spot matrix, we applied a uniform veil of glue. This choice was motivated by the small size of the sensor and the less pronounced effect of thermal contraction expected for the Ge-glue-Ge interface.
We used the manipulator of an ultrasonic bonding machine to provide a controlled force to attach the NTD to the Ge wafer surface and provide better reproducibility. Detector structure The CUPID-Mo single module and tower structure were designed by the Service de Physique de l'Etat Condensé (SPEC) at CEA (Commissariat à l'Énergie Atomique et aux énergies alternatives, Gif-sur-Yvette, France) according to the following requirements: the single module structure should be compact but permit the housing of four scintillating crystals in a single tower, taking into account the restricted space in the experimental set-up; the towers should be suspended by dedicated springs to mitigate the vibrational noise of the set-up [28]; the design should allow a simple installation inside the cryostat (see Fig. 1 and Sec. 2.8). The mechanical workshop of LAL (Orsay, France) fabricated the detector support structure. Each detector module (see Fig. 3) is a single-piece holder, made of highly radiopure NOSV TM copper from Aurubis (Hamburg, Germany). It contains both a Li 2 100 MoO 4 scintillation element and a Ge wafer. The bolometers are kept in place by small Polytetrafluoroethylene (PTFE) holders which decouple them from the thermal bath. The Li 2 100 MoO 4 crystals are supported with three PTFE elements on the top and bottom, while the LDs are clamped with three PTFE pieces. In Commissioning I, we did not install any reflecting foil around the crystals, because previous measurements demonstrated efficient particle identification performance despite a factor of 2 lower light collection efficiency [18]. Commissioning I was characterized by sub-optimal LD performance (see Sec. 2.3), hence we decided to surround the crystals' lateral sides with reflecting foil (3M Vikuiti TM) in addition to the replacement of the LDs' NTDs. Fig. 3 All components used to assemble a CUPID-Mo single detector module: a copper holder, a Li 2 100 MoO 4 crystal with glued NTD and heater, a Ge LD with NTD, the copper screws, the PTFE spacers and fixing elements, and the Kapton foil with golden pads. Note that the reflecting film is not shown here. Detector assembly We performed all activities related to the detectors' assembly in a cleanroom environment. All the used detector components were carefully cleaned before assembly to minimize the risk of surface re-contamination. In particular, copper elements were etched with citric acid, and PTFE elements were cleaned with ethanol in an ultrasonic bath. The tower assembly was performed in a class 10 cleanroom at LAL. The Li 2 MoO 4 crystal is fixed inside its copper housing with PTFE elements on top and bottom as well as surrounded by a reflecting foil. An assembled single module (top and bottom view), as well as all the CUPID-Mo detectors, are shown in Fig. 4 and Fig. 5, respectively. In total, five towers were assembled with four detectors each (Fig. 6). Wiring A dedicated wiring scheme was designed and implemented for the CUPID-Mo experiment, as the existing EDELWEISS-III readout could not accommodate the additional 20 dual-readout modules required for CUPID-Mo. We bonded gold wires from the NTDs to flat Kapton pads with gold contacts to provide the electrical readout connection as well as the weak thermal link to the heat bath.
Silk-covered constantan twisted wires were soldered on the other side and run up each tower to a larger Kapton pad with gold contacts glued at the top of the tower. On this pad the constantan wires and copper wires (connection to the cold electronics) were soldered. This connection provides a link to Si-JFET (junction gate field-effect transistor) based pre-amplifiers at 100 K through the copper plate inside the EDELWEISS cryostat (see Sec. 2.8). Fig. 6 The five assembled towers before installation in the cryogenic facility. Low background cryogenic facility The CUPID-Mo detector array is installed (see Fig. 7) in the EDELWEISS-III cryogenic set-up [30,28], located in LSM. This site is among the deepest underground laboratories in the world; the 1700 m (4800 m water equivalent) rock overburden, provided by the Frejus mountain, reduces the cosmic muon flux to 5 muons/m 2 /day [39]. The EDELWEISS cryostat is a custom dilution refrigerator with a reversed geometry [30], developed by Institut Néel (Grenoble, France). During a cryogenic run, this set-up requires periodic refilling of the liquid helium (LHe) bath every 10 days. Fig. 7 The CUPID-Mo scintillating bolometer array installed inside the EDELWEISS set-up. The remainder of the experimental volume is occupied by eleven Ge-based bolometers of the EDELWEISS direct dark matter search experiment and one scintillating bolometer for CUPID R&D. The consumption of LHe is minimized by the use of a cold vapor reliquefaction system based on three Gifford-McMahon cryocoolers. The cryocoolers are responsible for most of the vibrational noise in the set-up, thus necessitating the use of the suspension to achieve high-performance operation of the scintillating bolometers [28]. The passive shielding of the set-up against environmental radiation consists of lead (20 cm thickness) and polyethylene (55 cm thickness). The inner part of the lead shield is made of 2 cm thick low 210 Pb radioactivity (< 0.12 Bq/kg) lead recovered from sunken Roman-era galleys (hereafter called "Roman lead"). An additional internal Roman lead (14 cm) and polyethylene (10 cm) shield at the 1K-plate is used to protect the detectors from radioactivity from the cryostat components. A muon veto system surrounds the whole cryostat, providing 98% geometrical coverage. The muon veto is constructed from 46 individual plastic scintillator modules with a total surface of 100 m 2 and provides a detection efficiency of 97.7% for muons passing through a central sphere with 1 m radius [39]. The set-up is located inside a class 10000 cleanroom with a radon-depleted air supply (∼30 mBq/m 3 of 222 Rn). The experimental volume of EDELWEISS-III contains four floors (detector plates) with twelve slots each (see Fig. 8). The CUPID-Mo towers were inserted through the slots T3, T4, T10, T11, and T12 (see Table 1) and mechanically decoupled from the EDELWEISS-III detector plate with three metal springs for each tower. The remaining experimental space is partially occupied (tower slots T2, T5, T7, and T8) by 11 Ge bolometers for the EDELWEISS dark matter search program [40] and a cadmium tungstate based scintillating bolometer for CUPID R&D [41]. We added two mixed U/Th sources made of thorite mineral to the EDELWEISS-III automatic source deployment system [30] (see Fig. 8). These sources complement the already available γ-calibration sources of 133 Ba (∼1 kBq) and 60 Co (∼100 kBq) for periodic calibration of the CUPID-Mo detectors.
The activities of the sources are ∼50 Bq of 232 Th, ∼100 Bq of 238 U, and a few Bq of 235 U. The 133 Ba source emits γs with energies up to 0.4 MeV and was only used during the commissioning stage. The high-activity 60 Co γ source is used to eliminate space charges in the dual-readout heat-ionisation Ge bolometers of EDELWEISS-III [30] and also to calibrate the CUPID-Mo Ge LDs via source-induced X-ray fluorescence [42,43] (see Sec. 3.3). The 60 Co source is used mainly during and just after each LHe refill (every 10 days), while a regular ∼2-day-long Th/U calibration is scheduled for each period between subsequent LHe refills. The detector readout in the EDELWEISS-III setup is based on AC-biased cold electronics [30], which restricts the use of high-resistivity thermistors to at most a few MΩ resistance at a given bias current (working point) [28]. Custom-made room-temperature electronics modules called bolometer boxes (BBs) are mounted just outside of the cryostat to ensure short cables that limit noise pick-up. These BBs contain the electronics for the cold Si-JFET pre-amplifiers' biasing, Digital to Analog Converters (DACs) for the detectors' biasing, post-amplification, an anti-aliasing filter, and ADCs to record the CUPID-Mo NTDs [30]. All LMOs and five LDs are operated with BBs containing 16-bit ADCs, while the signal digitization for the remaining LDs is done with 14-bit ADCs. The pulser system, used to inject a constant Joule power through the heaters, is based on a 4-channel pulse generator with a typical injection periodicity of a few minutes. The data acquisition system [30,28] can record both online triggered and stream data; the triggered data is used only for monitoring purposes. CUPID-Mo detector operation Of the 20 LMO and LD pairs, only a single LD was lost due to a hardware issue, resulting in 39 out of 40 active channels. Additionally, 18 out of 20 heaters are available to inject pulses. The optimal working point of the LMO detectors was chosen to maximize the signal amplitude. The LDs, instrumented with smaller, more resistive sensors, operate in an over-biased regime to obtain an NTD resistance of ∼1 MΩ, mitigating the impact of AC biasing (see details in [28]). The modulation frequency of 500 Hz was chosen to reduce the pick-up of cryocooler-induced high-frequency noise. The nominal base temperature of the empty EDELWEISS cryostat is 11.5 mK. In the present, densely populated cryogenic setup, an additional heat load increases this base temperature to ∼20 mK, and we could stably operate at 20.7 mK with a few µW of regulation power. This temperature is considerably higher than the operating temperature in the LUMINEU predecessor [28] and it is expected to have an adverse effect on the detector performance. Nevertheless, the following analysis of a ∼2 week period with 11.1 days of physics data, 2.2 days of mixed Th/U source calibration, and 1.6 days of 60 Co irradiation provides a robust confirmation of the bolometric performance achieved within LUMINEU [28,?]. The data were acquired between March 24th 2019 and April 6th 2019 and correspond to a physics exposure of 0.1 kg×yr of Li 2 100 MoO 4 . This early data is comparable with the prior exposure presented in [18], and emphasizes the reproducibility of Li 2 MoO 4 detectors using a total of 20 detectors.
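As a rough cross-check of the numbers quoted above, the NTD resistance law given in the Sensors section can be evaluated at the 20.7 mK operating temperature. The sketch below is illustrative only: it uses the quoted average batch parameters, and self-heating under bias lowers the actual working resistance.

```python
import math

# NTD Ge thermistor resistance law quoted in the text: R = R0 * exp((T0 / T)**0.5)
# R0, T0 are the average batch parameters; T is the 20.7 mK plate temperature.
# Zero-bias estimate only: Joule self-heating under bias raises the sensor
# temperature and lowers the resistance at the actual working point.

R0 = 1.5      # ohm
T0 = 3.8      # kelvin
T = 0.0207    # kelvin (20.7 mK)

R = R0 * math.exp((T0 / T) ** 0.5)
print(f"Predicted zero-bias NTD resistance at {T * 1e3:.1f} mK: {R / 1e6:.2f} MOhm")
# -> roughly 1 MOhm, of the same order as the few-MOhm working resistances
#    allowed by the AC-biased readout described above.
```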
Data processing Two independent analysis frameworks, both exploiting the optimum filter technique [44], are used for the data processing: one, called DIANA [45,?], is adapted from the CUORE [14] and CUPID-0 [46] experiments, and the other was developed at CSNSM [47] and used for the analysis of the LUMINEU data [28]. The CSNSM code has been developed (using the MATLAB MULTI Integrated Development Environment) specifically for the analysis of scintillating bolometer data. It is more nimble and readily adapted to different experimental setups. In contrast, DIANA is a much broader framework, including analysis packages for larger detector arrays (in particular allowing for the analysis of coincident events). It is object-oriented C++ code with a PostgreSQL [48] database interface to track detector and electronics settings. The use of DIANA allows for comparison between different CUPID project demonstrator experiments with effectively the same analysis tools, and DIANA is expected to be used as the primary package for CUPID-Mo in the future. Therefore, all results presented below are based on the use of DIANA, while it is noted that very similar results were obtained with the CSNSM code, providing a cross-check of the DIANA processing. Performance of bolometric Ge light detectors Characteristic pulse shape parameters such as the rise- and decay-times, defined as 10% to 90% of the rising edge and 90% to 30% of the trailing edge of the LD pulse shape, have been investigated (see Table 2). We estimate typical (median) rise- and decay-times of 4.2 ms and 9.2 ms, respectively, from an averaged pulse, triggered and aligned on events recorded in an associated LMO crystal. Averaging of pulses was necessary since Li 2 MoO 4 has a moderate Relative Light Yield (RLY), which does not exceed 1 keV/MeV relative to the heat signal (see [28,29] and Sec. 3.4), and estimates from individual light pulses are subject to bias from noise fluctuations. We note that, in particular for the rise-time, both the 500 Hz sampling and the alignment of the average pulse become limiting factors for a more precise estimate. At a previous surface test at CSNSM with a similar temperature and working point but a 10 kHz sampling rate, a factor of 3 faster rise-time (0.96 ms) was observed in LD 4. To estimate the performance of the Ge LDs, we perform an in situ calibration. We employ the X-ray fluorescence of Mo or Cu that is generated when the crystals and setup are exposed to a higher-intensity γ source [42,43]. For Mo we expect characteristic peaks from the K α1 (17.48 keV, intensity I = 100%), K α2 (17.37 keV, I = 52%), and K β1 (19.61 keV, I = 15%) lines [49]. The Cu X-rays can give additional peaks from K α1 (8.05 keV, I = 100%), K α2 (8.03 keV, I = 51%), and K β1 (8.91 keV, I = 17%). Fig. 9 shows a typical X-ray spectrum obtained during the 60 Co source irradiation. The prominent features are a sum K α peak from Cu and both a sum K α and a distinct K β peak from Mo. The intensity of the Cu X-rays is much lower than that of those associated with Mo, as the Cu is only facing the LDs on the side. Also, the statistics in the Cu K α peak are very low for detectors far from the 60 Co source, and we chose to omit this peak from the LD calibration. Table 2 Performance of Ge light detectors of the CUPID-Mo experiment. Detectors marked with an asterisk (*) suffer from additional uncertainty due to prominent sinusoidal noise.
The quoted parameters are the NTD resistance at the working point (R Work ), the rise time (τ R ), the decay time (τ D ), the voltage sensitivity (A Signal ), and the baseline noise resolution (FWHM Noise ). For the definition of the listed variables see text. With a stable operating temperature of 20.7 mK and a strong NTD polarization for the Ge LDs, negligible nonlinearity is expected. We use a Gaussian fit to the most intense peak, the Mo K α X-rays, and perform a first-order polynomial calibration with zero intercept. The 1.4 g Ge LDs are instrumented with small-size NTDs that achieve a typical sensitivity of 1.1 µV/keV with an RMS of ∼ 40% (see Table 2). Uncertainties in the individual sensitivity estimates are dominated by the gain in the analog chain, with typical uncertainties of order 10% for several of the operational amplifiers in the amplification chain. The LDs' sensitivity is limited by the comparatively high regulation temperature of the detector plate and the strong NTD polarization. We estimate the baseline resolution for all detectors from a set of forced random trigger events injected every 101 s. We exclude one detector instrumented with a different NTD sensor (used in Commissioning I, see Table 2) and runs with atypical noise performance, resulting in 183/209 (LD-bolometer, run) pairs. The median of these estimates yields a typical baseline resolution of 148 eV FWHM, in agreement with the channel-based estimate in Table 2. We see good reproducibility, with individual channel estimates ranging from 66 eV up to 368 eV. A resulting scatter plot of the correlation between the sensitivity and the achieved baseline resolution is shown in Fig. 10. We note that the spread in detector performance is only slightly higher than for the NTDs on the Li 2 MoO 4 crystals (see Sec. 3.4). We want to emphasize this uniformity, which lends itself to applications in larger cryogenic detector arrays. For reference, we list the performance characteristics on an individual LD basis in Table 2. The reported performance in terms of the baseline resolution exceeds the requirements to achieve a better than 99.9% rejection of α events at 99.9% acceptance of γ/β events, as is discussed in detail in Sec. 3. Several improvements can be pursued for the full-size CUPID experiment. DC-biased electronics, a higher sampling rate, and the implementation of additional analysis and de-noising techniques can improve the quoted performance. Furthermore, lower-noise NTD Ge sensors and a lower operational temperature, resulting in a higher detector sensitivity, can also yield a significantly better LD performance, as demonstrated with a 20 eV FWHM baseline resolution in [51]. Performance of Li 2 100 MoO 4 bolometers The time constants of LMO bolometers are much longer than those of LDs. We obtain median values of 24 ms for the rise-time and 299 ms for the decay-time, with a significant spread of 208 ms in the decay-times and a smaller spread of 8 ms in the rise-times (see Table 3). These values are consistent with previously reported values [28], and are in the typical range for macroscopic cryogenic bolometers operated in the tens-of-mK range. We calibrate with a mixed Th/U source, with a most prominent peak at 2615 keV ( 208 Tl) and negligible gamma continuum, see Fig. 13. This is the closest observable γ line, ∼ 415 keV lower than the Q β β -value of 100 Mo. The calibration data were acquired over a short period (2.2 days), resulting in limited statistics of the detected γ peaks.
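The zero-intercept LD calibration described above can be sketched as follows. This is an illustrative reconstruction, not the actual DIANA implementation; the intensity-weighted Mo K α energy of 17.44 keV, the function names, and the `amplitudes` array are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: fit a Gaussian to the Mo K-alpha fluorescence peak in the
# optimum-filter amplitude spectrum, then apply a first-order polynomial
# calibration with zero intercept (i.e. a single gain factor).
# `amplitudes` is a hypothetical array of OF amplitudes (volts) for one LD.

E_MO_KALPHA = 17.44  # keV; intensity-weighted K-alpha1/K-alpha2 energy (assumed choice)

def gaussian(x, norm, mu, sigma):
    return norm * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def calibrate_ld(amplitudes, nbins=100):
    counts, edges = np.histogram(amplitudes, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    guess = [counts.max(), centers[np.argmax(counts)], 0.05 * (centers.max() - centers.min())]
    (_, mu, sigma), _ = curve_fit(gaussian, centers, counts, p0=guess)
    gain = E_MO_KALPHA / mu               # keV per volt, zero-intercept calibration
    return gain, abs(sigma) * gain        # calibration factor and peak width in keV
```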
We neglect nonlinearities in the detector response and fit using zero and the 2615 keV 208 Tl line. Additionally, we use this peak to correct for changes in thermal gain due to slow temperature drifts in the experimental setup. The resulting correction is a linear scaling factor obtained from the optimum filter (OF) amplitude versus baseline dependence in the calibration data, and it is applied to both calibration and background data. The detector sensitivity at the 20.7 mK operation temperature has a median value of 17 nV/keV with an RMS of about 30% (see Table 3). For unknown reasons, the detector LMO 2 shows very low sensitivity in comparison to the results of the CUPID-Mo Commissioning I (6 nV/keV at 20.5 mK) and of LUMINEU (47 nV/keV at 17 mK [42]). As in the case of the LDs, a larger sensitivity of the LMO bolometers is expected at colder temperatures (e.g., compare the results given in [28]). The same method utilized for the investigation of the LDs' baseline resolution (see Sec. 3.3) is also applied to the Li 2 100 MoO 4 bolometers. We obtain characteristic (median) values of 1.96 keV FWHM for the baseline resolution, with the spread of the distribution given in Fig. 11 and individual detector-based resolutions presented in Table 3. The baseline noise versus sensitivity data are also illustrated in Fig. 12. Table 3 Performance of 100 Mo-enriched Li 2 MoO 4 bolometers of the CUPID-Mo experiment operated at 20.7 mK in the EDELWEISS set-up at LSM (France). This table contains the following information: the crystal size and mass, the NTD resistance at the working point (R Work ), the rise-time (τ R ), the decay-time (τ D ), the voltage sensitivity (A Signal ), the baseline noise resolution (FWHM Noise ), the scintillation light yield (RLY) measured by the top LD (RLY Top ) and the bottom LD (RLY Bottom ), and the light yield quenching for alpha particles (QF α ). The omission of a measured parameter due to lack of statistics or insufficient performance / a non-operational light detector is indicated by "-". The median value for RLY Bottom is given for scintillators coupled to two LDs ( a ) and for a single LD ( b ); see text. For further analysis, we utilize a preliminary set of analysis cuts. First, periods of atypical noise and temperature spikes of the cryostat are rejected, removing ∼11% of the data from the commissioning period. A large part of the loss of livetime is caused by a suboptimal setting of the cryostat suspension, and improved stability has been observed in more recent data (about 95% of the data are kept after the data quality selection since April 2019). We exclude pile-up events with another trigger in a (−1, +2) s window, require a baseline slope consistent with the typical behavior of the channel, and require both the rise-time and the optimum filter peak position to be within 5 median absolute deviations (MAD) of the mean range as defined by the overall distribution of these values. We further select γ/β events by requiring events to have an RLY (see Sec. 3.5) within 4σ of the mean for the light signal incident in the LD associated with a given LMO bolometer. The resulting calibration data are presented as a summed spectrum in Fig. 13. The 2615 keV 208 Tl resolution is 5.3 keV FWHM, estimated with an unbinned extended maximum likelihood (UEML) fit shown in the inset. The fit model includes a Gaussian function and two background components: a smeared step function for multi-Compton events and a locally flat background.
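The thermal-gain stabilisation mentioned above can be illustrated with a minimal sketch: regress the OF amplitude of the 2615 keV reference events against the event baseline (a proxy for the detector temperature) and rescale every event accordingly. The function and array names are hypothetical, and the actual processing may differ in detail.

```python
import numpy as np

def stabilise(amp, base, amp_ref, base_ref):
    """Minimal sketch of a linear thermal-gain (drift) correction.

    amp, base          -- amplitudes and baselines of the events to correct
    amp_ref, base_ref  -- OF amplitudes and baselines of the 208Tl reference events
    """
    slope, intercept = np.polyfit(base_ref, amp_ref, 1)     # linear gain-vs-baseline trend
    nominal = intercept + slope * np.median(base_ref)        # reference operating point
    expected = intercept + slope * np.asarray(base)          # gain predicted per event
    return np.asarray(amp) * nominal / expected              # linear scaling factor applied
```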
We note a potential bias on the resolution, since we perform the thermal gain stabilization on this gamma peak and are in a low-statistics limit. A toy Monte-Carlo (MC) with a typical value of 20 counts per detector resulted in an estimated bias (underestimate of the 208 Tl peak width) of 0.3 keV. In addition to the good energy resolution, we highlight the linearity and uniformity of the data. The maximum residual between the observed and expected peak positions in the summed calibration spectrum was 3 keV, for the 1120 keV line from 214 Bi. Similarly, we observe an excess width for all γ peaks of at most 5 keV, due to individual detector non-linearities that are not yet accounted for. Performance of light-vs-heat dual readout We estimate the RLY from events in the 2-3 MeV region, close to the Q-value for 0νββ of 100 Mo. We create a distribution of light/heat energies and fit a Gaussian to this distribution to obtain the RLY µ γ for γ/β events. We obtain 31 individual (LMO, LD) pairs, comprised of 15 LMOs in the line of sight of two LDs (minus a failed and an underperforming LD) and 5 LMOs with a direct line of sight to a single LD (see Sec. 2.6). The resulting RLYs (in keV/MeV) are listed in Table 3. They combine the effect of scintillation light production in the crystal with that of light propagation to the Ge absorber, and a pattern that is dominated by the latter effect emerges. Li 2 100 MoO 4 crystals on the top of the towers, with a reflective copper cap at the upper side and a single LD at the bottom, show the highest RLYs, with a median value of 0.90 keV/MeV. In addition, a ∼0.1 keV/MeV difference is observed in light collection between the top (0.74 keV/MeV) and the bottom (0.64 keV/MeV) LDs. This effect is a result of a protrusion that is part of the crystal support, which acts as an aperture for downward-going light. The obtained results are consistent with previous observations [28,29]. The summed light collected from two adjacent LDs is the closest estimate we have of ideal light collection. It is as high as 1.44 keV/MeV, with a median value of 1.35 keV/MeV. The uncertainty for individual RLY estimates has been quantified from the spread in RLY estimates of three distinct 60 Co plus 208 Tl datasets. We observed a ∼4% spread around the mean (RMS), with a maximum deviation of 16% for a single detector. For this analysis, we opt to use the LD in the same detector module just below the crystal by default. In cases where the lower LD is unavailable or performs significantly worse (LMO 1, 3, 6 and 7), we switched to associating the upper LD to the crystal (see also Tables 2 and 3). Taking into account the measured RLY (Table 3) and the LD performance (Table 2), all detectors achieve better than 99.9% discrimination of α events (see Sec. 3.6), with a typical example of the discrimination power given in Fig. 15. The preliminary γ/β selection by RLY (blue) defined before eliminates a significant population of α events with ∼20% of the RLY of γ/β events, plus a few remaining events at higher light yield than expected (red). This particular crystal is characterized by the highest contamination level of 210 Po, with ∼ 0.5 mBq/kg, and hence best exemplifies the alpha discrimination power achieved for a scintillating bolometer with typical performance values of 0.67 keV/MeV RLY and 0.18 keV FWHM Noise of the coupled LD. We observe that the 210 Po α events are reconstructed at ∼7% higher energy, at 5.8 MeV instead of 5.4 MeV.
This shift is much larger than nonlinearities in the γ region would suggest, but we note that a similar difference in the detector response for α particles has been observed previously with lithium molybdate based detectors [28,29]. Events at higher light yield than γ/β events can be observed due to noise spikes and misreconstructed amplitude estimates in the LD, as well as due to close β contaminations with a coincident γ depositing energy in the Li 2 100 MoO 4 crystal. We estimate a scintillation light quenching of α particles with respect to γ/β particles of (19.7 ± 1.0)% across the detectors (see Table 3). These results are also within expectations for this scintillation material [28,29,18]. Extrapolated α discrimination of Li 2 100 MoO 4 scintillating bolometers We systematically evaluate the α discrimination level following Refs. [52,28,29] and report the discrimination of α versus γ/β events in terms of the discrimination power (DP) at the Q-value for 0νββ in 100 Mo. The parameters in the definition of the DP are the mean RLYs µ α , µ γ for α and γ/β events, respectively, and the resolutions σ α , σ γ . We obtain detector-based values µ α = QF α · µ γ from the measured µ γ and approximate the very uniform light quenching of α events with QF α = 0.2 (see Table 3). The expected LD resolutions σ α and σ γ at the endpoint of the 100 Mo decay are extrapolated by adding in quadrature the baseline resolution and a statistical photon noise component with an average photon energy of 2.07 eV [52]. The resulting median discrimination power is 15.0, with the worst-performing detector having a discrimination power of 6.3. Hence all detectors are expected to achieve better than 99.9% α rejection with more than 99.9% γ/β acceptance. We note that this model calculation does not take into account additional sources of uncertainty, such as the variation associated with the position of the incident particle interaction and the subsequent light propagation. However, the validity of the model is supported by the excellent agreement between the predicted and achieved discrimination in neutron calibration data in previous measurements [29]. The computed discrimination level exceeds the requirements for CUPID, and we plan to study adverse effects due to non-Gaussian tails with larger statistics in the future. If multiple alpha peaks emerge in individual detectors, we will also be able to study the α energy scale and the energy dependence of the α discrimination from data. It should be noted that we are only using one of the two LDs, typically the one at the bottom of each detector module (see Sec. 3.4). An optimized selection of the better-performing LD, or a combined light estimate using both LDs, will further improve the quoted discrimination. In addition, it is expected that information from the combination of the LDs could be relevant to break degeneracies if non-Gaussian tails related to contamination at the NTDs or the LDs were encountered. Radiopurity of Li 2 100 MoO 4 crystals We apply an additional anticoincidence cut with a time coincidence window of 100 ms between Li 2 100 MoO 4 detectors, a so-called multiplicity one (M1) cut, to reject multi-Compton and muon shower events, and obtain the background spectrum shown in Fig. 16. The γ/β spectrum of Li 2 100 MoO 4 bolometers above ∼1 MeV is dominated by the 2νββ decay of 100 Mo with an activity of 10 mBq/kg [28].
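The extrapolation just described can be checked numerically. The sketch below assumes the discrimination power definition commonly used in the cited scintillating-bolometer literature, DP = (µ_γ − µ_α)/sqrt(σ_γ² + σ_α²), which is not spelled out explicitly above, and plugs in the "typical performance" values quoted for the 210 Po-contaminated crystal (RLY = 0.67 keV/MeV, LD baseline noise 0.18 keV FWHM, QF α = 0.2, mean photon energy 2.07 eV).

```python
import math

# Hedged cross-check of the alpha/gamma discrimination power at Q_bb.
# Assumed definition: DP = (mu_gamma - mu_alpha) / sqrt(sigma_gamma**2 + sigma_alpha**2)

FWHM_TO_SIGMA = 1.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))

def ld_sigma(light_ev, fwhm_noise_ev, photon_ev=2.07):
    """LD resolution: baseline noise and photon-counting statistics in quadrature."""
    sigma_base = fwhm_noise_ev * FWHM_TO_SIGMA
    sigma_stat = math.sqrt(light_ev / photon_ev) * photon_ev
    return math.hypot(sigma_base, sigma_stat)

q_bb_mev = 3.034          # Q-value of 100Mo double-beta decay
rly = 0.67                # keV of light per MeV of heat for gamma/beta events
qf_alpha = 0.2            # alpha light quenching factor
fwhm_noise_ev = 180.0     # LD baseline resolution in eV

mu_gamma = rly * q_bb_mev * 1e3           # detected light in eV
mu_alpha = qf_alpha * mu_gamma
dp = (mu_gamma - mu_alpha) / math.hypot(ld_sigma(mu_gamma, fwhm_noise_ev),
                                        ld_sigma(mu_alpha, fwhm_noise_ev))
print(f"DP ~ {dp:.1f}")   # about 13
```

The result, of order 13, sits between the quoted worst case of 6.3 and the median of 15.0, as one would expect for a detector with typical rather than best light yield and LD noise.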
In 11.1 days of background data, we observe no event compatible with the RLY of γ/β events above 3034 keV, the Q-value for double-beta decay in 100 Mo. The estimate for the resolution at 2615 keV, (6.5 ± 1.4) keV, is compatible with the prediction from the calibration data, although this is subject to considerable uncertainty due to the limited statistics. By relaxing the pulse shape cuts, and removing the γ/β RLY cuts and the M1 cut, we can investigate the α region with a set of basic cuts that are designed to have better than 99% acceptance. The only clear contaminants seen in the spectrum (see Fig. 17) are a bulk and a surface peak from 210 Po ( 210 Pb), as observed in the LUMINEU studies [28,29]. For a few of the nuclei in the Th-series ( 232 Th, 228 Th) and U-series that have a clean decay signature, we start to see first hints of a contamination. However, the number of events in a ±30 keV window around the nominal decay energy is compatible with zero at the 2σ level for all of the decay signatures. We thus place a conservative upper limit at a level of 2 µBq/kg (Th-series) and 3 µBq/kg (U-series) (90% C.L.) on the activity in the U/Th chains, using the largest observed event count for any of the decays in the U/Th chains. We look for possible backgrounds from surface contaminants in the 3-4 MeV region. Excluding a potential 190 Pt alpha bulk contribution in a ±30 keV window around 3269 keV, we observe 14 events, which is equivalent to a background of (0.14 ± 0.04) counts/(keV×kg×yr) in degraded alpha events before the rejection by RLY. Outlook Based on these first physics data, we are confident that the Li 2 100 MoO 4 cryogenic detectors possess a high degree of reproducibility and are well suited for scaling to a much larger, CUORE-sized detector array in CUPID [50]. We expect the current CUPID-Mo experiment to be able to set significant limits on 0νββ in 100 Mo. Consequently, we evaluate the CUPID-Mo sensitivity using the Bayesian method for limit setting for a counting experiment in a ±2σ region of interest around Q β β . At present, several steps of the data analysis procedure have not been fully optimized, leaving room for improvement. We expect to achieve a 5 keV energy resolution (FWHM) at Q β β with a dedicated optimization of the energy reconstruction algorithms. With an average containment efficiency for 0νββ decay events of 75%, and assuming a ∼ 90% analysis efficiency for the combined trigger, multiplicity, pulse shape analysis (PSA) and RLY cut efficiencies, as obtained in CUORE [14] and CUPID-0 [27], we obtain the exclusion sensitivity curves reported in Fig. 18. If we demonstrate a background index of 10 −2 counts/(keV×kg×yr) with increased statistics, CUPID-Mo reaches a sensitivity superior to the most recent limit on the 100 Mo half-life set by NEMO-3 [34] in just 6 months of accumulated livetime. Fig. 18 also reports the sensitivity for the more optimistic scenario where the background level is 10 −3 counts/(keV×kg×yr); in this case, the experiment is practically background-free for a total of 1 yr of livetime, reaching a final sensitivity of T 1/2 0νββ = 2.43 × 10 24 yr. The exclusion sensitivity has a very minor dependence on the detector energy resolution and decreases by ∼10% for a factor of two worse resolution and a background index of 10 −2 counts/(keV×kg×yr). Conclusion The first physics data of the CUPID-Mo experiment validate and extend the previously reported bolometric performance for cryogenic Li 2 MoO 4 crystals [28,29] on a much larger array of 20 detectors.
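As a rough consistency check of the quoted background-free sensitivity, one can use the standard counting estimate T_1/2 = ln2 · N · ε · t / S. The isotope mass, containment and analysis efficiencies, and livetime below are taken from the text; the 90% C.L. signal upper limit of about 2.3 counts for zero observed events and the 100 g/mol molar mass are assumptions of this sketch, so only the order of magnitude should be compared.

```python
import math

N_A = 6.022e23
m_mo100_g = 2264.0                       # grams of 100Mo in the array (Sec. 2.1)
n_nuclei = m_mo100_g / 100.0 * N_A       # assumed molar mass of 100 g/mol
eff = 0.75 * 0.90                        # containment times analysis efficiency
livetime_yr = 1.0
s_up = 2.3                               # assumed 90% C.L. counts, background-free case

t_half = math.log(2) * n_nuclei * eff * livetime_yr / s_up
print(f"T1/2 exclusion sensitivity ~ {t_half:.1e} yr")
# -> a few times 1e24 yr, the same order of magnitude as the quoted 2.43e24 yr
#    for the optimistic background scenario.
```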
We find that crystal growth and detector assembly can be well controlled to obtain excellent uniformity in performance and radiopurity. In particular, the summed energy resolution was 5.3 keV (6.5 keV) FWHM at 2615 keV in calibration (physics) data for 19 out of 20 detectors. The measured light yield for γ/β events (0.6-0.9 keV/MeV), the quenching of the scintillation light for α particles (20%) with respect to γ/β events, and the achieved baseline resolution of the bolometric Ge light detectors (146 eV FWHM) are compatible with full α to γ/β separation (median discrimination power value of 15). The Li 2 100 MoO 4 crystals also exhibit a high level of radiopurity, in particular ≤3 µBq/kg of 226 Ra and ≤2 µBq/kg of 232 Th. The results indicate the prospect of surpassing the sensitivity of NEMO-3 with ∼6 months of physics data in the current demonstrator. The technology is scalable, and the first results presented in this article strengthen the choice of Li 2 100 MoO 4 as the baseline option for application in the CUPID next-generation cryogenic 0νββ experiment. Additional data from the current and future demonstrators are essential to develop a detailed background model, investigate and optimize the performance in the region of interest for 0νββ, and further strengthen the projections for CUPID.
10,843
2019-09-06T00:00:00.000
[ "Physics" ]
The existence of permanent facilities for nuclear disaster medicine progresses the development of manuals regardless of the years of designation elapsed – To ensure the quality of nuclear disaster medical care, facilities are being developed worldwide for use in the event of a nuclear disaster. However, the relationship between the existence of permanent facilities and the presence or absence of facility operation manuals has not been clarified in the field of nuclear disaster medicine. This study aims to determine the relationships between the existence of permanent facilities, the presence or absence of facility operation manuals, and the number of years elapsed since a facility was designated for nuclear disaster medicine. In September 2021, 26 of the 53 nuclear disaster-related hospitals in Japan responded to an online questionnaire (valid response rate of 49.1%). The existence of permanent facilities for nuclear disaster medicine was significantly higher in facilities with fewer years of designation than in those with more years of designation. The existence of permanent facilities for nuclear disaster medicine facilitated the organisational awareness of a nuclear disaster, as evidenced by the availability of manuals, regardless of the number of years elapsed since designation. In conclusion, the study suggests that the existence of permanent facilities is an important factor for organisational preparedness for a nuclear disaster. Introduction A nuclear disaster is a rare phenomenon worldwide. However, once a nuclear disaster occurs, the scale of damage is significant. To mitigate on- and off-site human suffering in a nuclear disaster, human resource development in nuclear disaster medicine is practised around the world (Cho et al., 2018; Bowen et al., 2020; Shubayr and Alashban, 2022). In Japan, more than 12 yr have passed since the Fukushima Daiichi nuclear power station accident, during which time the Nuclear Regulation Authority has led human resource development training for nuclear disaster medicine (Tsujiguchi et al., 2019). Nuclear disaster medicine has encompassed not only human resource development but also the establishment of facilities in specific nations that provide nuclear disaster medicine (Cho et al., 2018; Marzaleh et al., 2020; Munasinghe et al., 2022). In Japan, facilities have been designated nuclear emergency core hospitals (NECHs) or advanced radiation emergency medical support centres (AREMSCs) since 2015 (Nagata et al., 2022). These facilities are intended to provide appropriate medical care to injured and sick patients, including individuals who are contaminated by or exposed to radiation, in the event of a nuclear disaster (Japan Nuclear Regulation Authority, 2022). To be designated as such facilities, it was necessary to develop 'soft' aspects, such as medical functions and specialised staffing, and 'hard' aspects, such as facilities, equipment, medical materials, and radiation-measuring equipment (Supplementary Tab. 1; Japan Nuclear Regulation Authority, 2022).
The development of software and hardware attributes is important for ensuring that medical facilities can attend to several types of patients during general disasters (Marzaleh et al., 2020; Munasinghe et al., 2022). With regard to hardware, special spaces such as initial treatment and decontamination rooms, in particular, must be developed by medical institutions to receive patients who are injured by radioactive materials (Marzaleh et al., 2020; Munasinghe et al., 2022). Simply installing such hardware has been insufficient. In the past, medical staff were required to prepare manuals that described policies, protocols and procedures to support the use of their facilities' 'hard' aspects (Kutsch, 1956; Shapiro, 1957). The preparation of such manuals is associated with an organisational awareness of how to utilise the health facility system (Sulzbach and Stivale, 1990). Manuals on facility utilisation are essential (Marzaleh et al., 2020; Munasinghe et al., 2022) and can help medical staff effectively use their facilities (Sulzbach and Stivale, 1990). Therefore, the development of manuals with key information, namely the usage of the facilities, the preparation to receive contaminated patients and provide medical care, and the establishment of staff roles in nuclear disaster medicine, is associated with the implementation of effective nuclear disaster medicine. As shown in Supplementary Table 1, regardless of the availability of manuals on the use of their facilities, designation as a NECH or an AREMSC is possible if institutions have facilities such as an initial treatment or decontamination room, even if these spaces for receiving contaminated injured patients are temporary. Although more than seven years have elapsed since the designation of facilities for nuclear disaster medicine in Japan, the relationship between the existence of permanent facilities (defined in this study as facilities possessing the relevant hardware) and the presence or absence of manuals for operating the facilities has not been clarified. Against this backdrop, this study clarifies the relationships between the existence of permanent facilities (i.e., with the relevant hardware), the presence or absence of manuals related to a nuclear disaster, and the years that have elapsed since the designation of the facilities for nuclear emergency medicine. The results of this study can improve medical staff's awareness of nuclear disaster preparedness specific to the usage of facilities and can also contribute to standardising the level of medical care provided to contaminated injured patients. Questionnaire survey process This cross-sectional study was approved by the ethics committee of Fukushima Medical University (approval number: 2019-417) and used a questionnaire for facilities. The questionnaire survey was targeted at 53 NECHs and AREMSCs (hereafter collectively named nuclear disaster-related hospitals; NDRHs) in Japan. Conducted between 1 September and 30 September 2021, the study's questionnaire survey process was as follows: 1) questionnaire survey guidelines and questionnaire items were sent by post to the departments of each facility; 2) department officers in each facility accessed the URL in the guidelines using their PCs and entered responses to the questionnaire items online; and 3) online responses to the questionnaire items were collected.
Questionnaire items and analytical methods The questionnaire survey was based on the characteristics of the target medical facilities and was limited to some of the designation requirements for NDRHs (Supplementary Tab. 1). This primary survey focused on three elements: 1) the years that had elapsed since designation as an NDRH; 2) the availability of manuals on nuclear disasters; and 3) the existence of a permanent hardware facility in the NDRH. The questionnaire queried the following five characteristics of the responding facilities: 1) the annual number of medical personnel per facility that had attended nuclear disaster medicine training seminars as of September 2021 (less than 50 or over 50; this number was estimated using the number of seminars per year and the number of NDRHs); 2) the regional classification of NDRHs in Japan (East versus West Japan; the classification was according to the region in which the four Nuclear Emergency Medical Support Centres are located (Nagata et al., 2022)); 3) the average number of external patients per day at a target facility (less than 1,100 or over 1,100; the number was derived from the average daily number of external patients in a hospital's facilities in Japan in 2019); 4) the years elapsed since designation as an NDRH (less than four years or over four years; the threshold was set to four years given that the average number of years since designation was 3.90 yr as of September 2021); and 5) the availability of manuals on nuclear disasters at the target facility (Yes or No; Yes indicated that the target facility possessed a manual on nuclear disasters). Table 1. The characteristics of facilities that responded to the questionnaire and the existence of permanent facilities. Four items in the questionnaire (see Supplementary Tab. 1) addressed the availability of permanent facilities: 1) a dedicated emergency room for contaminated patients ('Yes' or 'No'; 'Yes' if a room was permanently dedicated to receiving contaminated injured patients in a nuclear disaster and 'No' if such a room was only temporarily available); 2) a dedicated indoor space for the examination of body surface contamination ('Yes' or 'No'; 'Yes' if a facility existed that could prevent the spread of radioactive materials contributing to body surface contamination and 'No' if such a facility was unavailable); 3) storage facilities for radioactive waste ('Yes' or 'No'; 'Yes' if a facility permanently existed that could store waste contaminated with radioactive material and 'No' if such a facility was only temporarily available); and 4) a water storage tank for storing radioactively contaminated water ('Yes' or 'No'; 'Yes' if a facility permanently existed for a water storage tank and 'No' if such a facility was only temporarily available).
Twenty-six facilities responded to the study questionnaire. The valid response rate for the study was 49.1%. The following methods were used to analyse the responses to the questionnaire items: 1) analysis of the existence of permanent hardware facilities in NDRHs against the number of years elapsed since designation as an NDRH and the availability of manuals on nuclear disasters; and 2) comparison, across the four items, of the trends in the existence of permanent hardware facilities in a 2 × 2 matrix describing the relationship between the years elapsed since designation as an NDRH and the availability of manuals on nuclear disasters. The analyses were 2 × 2 two-tailed Fisher's exact tests using the statistical analysis software JMP 14.3 (JMP Statistical Discovery LLC, Cary, NC, USA). The significance level for the statistical analyses was set at 5%. Results Table 1 shows that facilities that had been designated as an NDRH for over four years were significantly more likely to have over 50 total staff (p = 0.015) who had attended nuclear disaster medicine training. Moreover, facilities that had been designated as an NDRH for over four years were significantly less likely than those designated for less than four years to have permanent dedicated emergency rooms for contaminated patients (p = 0.048 using the one-tailed test), dedicated indoor spaces for body surface contamination examination (p = 0.036), and storage facilities for radioactive waste (p = 0.016). Conversely, the availability of manuals on nuclear disasters was significantly more likely when two items specific to permanent facilities (a dedicated indoor space for body surface contamination examination [p = 0.038] and storage facilities for radioactively contaminated water [p = 0.038]) were permanently present than when these items were only temporarily available (Tab. 1). No association was observed between the years elapsed since designation as an NDRH and the availability of manuals on nuclear disasters. Figure 1 shows the matrix of the years that had elapsed since designation as an NDRH and the availability of manuals on nuclear disasters for the four items specific to the existence of permanent facilities. We performed a 2 × 2 Fisher's exact test using these data and observed no significant differences in any of the cases. In the group of facilities which had been designated as an NDRH for over four years and did not have manuals on nuclear disasters, the proportion of non-permanent facilities ranged from 67% to 83% (Fig. 1; right and top). Of the facilities that had been designated as an NDRH for over four years and in which manuals on nuclear disasters were available, 36-55% were permanent (Fig. 1; right and bottom). The results indicated that the lack of progress in the development of manuals occurred in temporary facilities, despite the long time that had elapsed since their designation as an NDRH. In contrast, 86% to 100% of facilities which had been designated as an NDRH for less than four years and had manuals on nuclear disasters were permanent (Fig. 1; left and bottom). This group was characterised by the fact that the facilities were permanent and the manuals were well developed, despite the short time since designation as an NDRH. The small sample size in the group with less than four years since designation and no manuals on nuclear disasters (Fig. 1; left and top) was attributable to the short time since designation and indicated the urgent need for manual development.
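For readers who wish to reproduce this kind of analysis outside JMP, the sketch below shows an equivalent 2 × 2 two-tailed Fisher's exact test in Python. The contingency counts are hypothetical placeholders, not the study data, which are reported above only as percentages and p-values.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table (NOT the study data):
# rows    = years since designation as an NDRH (<4 yr, >=4 yr)
# columns = permanent vs. temporary dedicated indoor space for
#           body-surface contamination examination
table = [[10, 3],   # <4 yr since designation: permanent / temporary
         [4, 9]]    # >=4 yr since designation: permanent / temporary

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-tailed p = {p_value:.3f}")
```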
Discussion Enhancing medical facilities to ensure the treatment of radionuclide-contaminated patients in a nuclear disaster is important. The study examined the requirements for establishing permanent medical facilities for a nuclear disaster. The results presented in Figure 1 supported an unexpected relationship between the years elapsed since designation as an NDRH and the existence of a permanent facility. Specifically, the study showed that the development of facilities for nuclear disaster medicine was driven by a strong sense of mission and social factors and was not influenced by the passage of time. The study indicated that the availability of manuals related to nuclear disasters was associated with the existence of permanent facilities regardless of the passage of time. Table 1 shows an unexpected inverse trend between the number of years elapsed since designation as an NDRH and the existence of permanent facilities. Two explanations are offered for the results related to the existence of permanent facilities (Tab. 1). First, facilities for which over four years had elapsed since designation as an NDRH may have had a medical staff with a strong sense of mission to provide nuclear disaster medicine without financial support at the time of facility designation. These facilities may have been designated to fulfil minimal requirements (see Supplementary Tab. 1). Second, we presume that facilities designated as an NDRH less than four years ago were designated with the financial support of the Cabinet Office of the Japanese government and other authorities for hardware development. Particularly after 2020, the overlap of coronavirus disease 2019 infections with the enhancement of permanent facilities for nuclear disaster medicine highlighted the important role of healthcare facility hardware in a worldwide all-hazards approach to natural disasters and terrorism using chemical or biological attacks (Marzaleh et al., 2020; Munasinghe et al., 2022). This complex combination of disaster factors is considered to have been reflected in the designation of facilities as NDRHs in Japan. Therefore, this study indicates a case in which the development of medical facilities was driven by social factors. The permanent establishment of nuclear disaster medical facilities may have further influenced attitudes towards nuclear disaster preparedness, including ensuring the availability of manuals at such facilities. The results in Table 1 and Figure 1 illustrate the relationships between the permanent establishment of nuclear disaster medical facilities, the availability of manuals on nuclear disasters and the years that had elapsed since designation as an NDRH. We hypothesised that facilities for which more years had elapsed since designation as an NDRH were more likely to possess manuals on nuclear disasters.
However, this relationship was shown to be confounded and was dependent on four items specific to the existence of a permanent facility. This result was expected because when items specific to the existence of permanent facilities are present, medical staff at the relevant facility have no choice but to prepare manuals to utilise the relevant hardware, even when the elapsed time since designation as an NDRH is short. Manuals allow healthcare facilities to compile procedures and policies that guide specific actions and disseminate the information to medical staff (Kutsch, 1956; Shapiro, 1957; Sulzbach and Stivale, 1990). Therefore, permanent facilities for nuclear disaster medicine must ensure the development of a manual that guides the entire medical staff in the use of the facilities and increases the facility's nuclear disaster preparedness. Conversely, manuals are not developed when facilities are temporary, even if a long time has elapsed since the designation of the facilities as NDRHs. We presume that this finding is attributable to the difficulty among medical staff in developing manuals for temporary facilities. Therefore, we assert that the establishment of permanent medical facilities for nuclear disaster medicine is important for ensuring the availability of manuals and that such permanent facilities improve the medical staff's awareness of and preparedness for a nuclear disaster. Finally, the nuclear disaster manuals prepared by facilities should be improved by incorporating multiple perspectives, as described below. For example, the availability of a manual does not guarantee that medical staff will be able to utilise the manual when necessary. To ensure effective utilisation of the manual, medical staff should receive regular radiation-focused education and nuclear disaster training (Cho et al., 2018; Shubayr and Alashban, 2022). Therefore, in addition to facility improvement efforts, personnel training and commitment to manage nuclear disasters should be consistent among all facilities nationwide (Bourguignon, 2022). Furthermore, nuclear disaster manuals must be flexible. Indeed, although domestic NDRHs are intended to accommodate exposed and contaminated individuals during a nuclear disaster, they will also realistically provide medical care for radiation workers or victims of nuclear terrorism with high levels of external and internal exposure (Munasinghe et al., 2022). Therefore, NDRHs should improve their manuals to manage multiple types of radiation emergencies. The study had some limitations. First, the study did not include an investigation of 'soft' aspects, such as the preparation of medical equipment and logistics flows, features that are considered important from a global perspective (Marzaleh et al., 2020; Munasinghe et al., 2022). However, as shown in Supplementary Tab.
1, the NDRHs functioned as core hospitals for general disasters, and the flow of medical materials, equipment and logistics ensured preparedness (Japan Nuclear Regulation Authority, 2022). Additionally, the availability of materials and equipment related to nuclear disaster medicine was included as a requirement for designation (Supplementary Tab. 1); therefore, we considered that the NDRHs already had materials and equipment related to nuclear disaster medicine (Japan Nuclear Regulation Authority, 2022). Second, the study used a questionnaire survey which queried the existence of permanent facilities; therefore, the sample size of responses was small compared with that for a typical questionnaire survey of individuals. As a result, we were unable to perform advanced statistical analyses, such as logistic regression analysis. However, sample sizes in previous studies that included surveys of targeted facilities were similar to that in our study (Munasinghe et al., 2022). Furthermore, given this pilot study's aim of examining the availability of manuals on nuclear disasters, the existence of permanent facilities and the years that elapsed since designation as an NDRH, the study results are sufficiently novel even without the use of advanced statistical analysis methods. Finally, the development of nuclear emergency core hospitals is still ongoing in Japan. Therefore, based on the results of this study, we plan to conduct a full-scale survey of the awareness of nuclear disaster preparedness in each facility in 2024 or beyond. The expected results will contribute to standardising the level of medical care that is provided to contaminated injured patients across Japan. Conclusions Regardless of the number of years elapsed since designation as an NDRH, the existence of a permanent facility was relevant to the availability of manuals on nuclear disasters in medical establishments. We speculate that when a facility that provides nuclear disaster medicine is permanently present, the awareness of nuclear disaster preparedness increases at the facility, and the medical staff may be more motivated and engaged in the preparation of manuals that guide the utilisation of the hardware. Therefore, medical facilities that prepare for nuclear disaster must not only strengthen 'soft' aspects such as medical staff training but also ensure that the facilities are permanent. Strengthening both software and hardware aspects will clarify the national standard of medical care that should be provided to radioactively contaminated injured patients.
Fig. 1. The number of years elapsed since designation as an NDRH and the availability of manuals on nuclear disasters among each facility. NDRH: Nuclear Disaster Related Hospitals.
4,345.6
2024-04-01T00:00:00.000
[ "Medicine", "Environmental Science", "Engineering" ]
The Complexity of a Dengue Vaccine: A Review of the Human Antibody Response Dengue is the most prevalent mosquito-borne viral disease worldwide. Yet, there are no vaccines or specific antivirals available to prevent or treat the disease. Several dengue vaccines are currently in clinical or preclinical stages. The most advanced vaccine is the chimeric tetravalent CYD-TDV vaccine of Sanofi Pasteur. This vaccine has recently cleared Phase III, and efficacy results have been published. Excellent tetravalent seroconversion was seen, yet the protective efficacy against infection was surprisingly low. Here, we will describe the complicating factors involved in the generation of a safe and efficacious dengue vaccine. Furthermore, we will discuss the human antibody responses during infection, including the epitopes targeted in humans. Also, we will discuss the current understanding of the assays used to evaluate antibody response. We hope this review will aid future dengue vaccine development as well as fundamental research related to the phenomenon of antibody-dependent enhancement of dengue virus infection. Infection with a flavivirus can cause a wide range of clinically overt symptoms [1,2], potentially resulting in death. For example, JEV is the leading cause of viral encephalitis in Asia, with a 30%-40% case fatality rate [2]. Dengue is the most common arthropod-borne viral infection occurring worldwide, with an estimated 360 million infections and 96 million symptomatic cases in 2010 [3]. On average, 500,000-1 million individuals develop severe disease, including hemorrhage and plasma leakage, resulting in 25,000 deaths [4]. Currently, there are vaccines available for YFV, TBEV, and JEV. Yet, there is no vaccine available for the closely related DENV [5]. This is in part due to the existence of four genetically and antigenically distinct DENV serotypes (Fig 1). There is approximately 40% divergence between the amino acid sequences of the serotypes (Fig 1) [6,7] and up to 9% mismatch within a serotype (Fig 1) [8]. The diversity of the genotypes of JEV, WNV, and TBEV is much less, with 4.1%, 2%, and 5.6% difference, respectively [9,10]; therefore, no distinct serotypes exist. Another factor for the complexity of the DENV vaccine lies in the severity of disease. All four DENV serotypes can cause symptoms ranging from acute febrile illness to severe manifestations such as hemorrhage or organ impairment. Severe disease is most often seen during secondary, heterotypic reinfections [11,12]. The incidence of severe disease during secondary, heterologous infection relative to primary infection can be 20-fold to 80-fold higher [12][13][14][15]. The observation that disease can be more severe during secondary infections severely hampered the development of a vaccine, as it implies the need to simultaneously induce immunity to all four existing DENV serotypes over a prolonged period [16,17]. Multiple vaccine formulations are currently being tested in preclinical and clinical stages, and these have been reviewed before [18]. Here, we will focus on the Sanofi Pasteur live attenuated vaccine since this is the most advanced vaccine with known efficacy results.
Fig 1. The phylogenetic tree is based on the amino acid sequence of the envelope glycoproteins. The methodology and National Center for Biotechnology Information (NCBI) IDs of all used genotypes for the flaviviruses and dengue viruses are provided in S1 Dataset. The table denominates the percentage of consensus between the serotypes based on the envelope amino acid sequences. Sequence identities were calculated using the Sequence Identity and Similarity (SIAS) calculator (http://imed.med.ucm.es/Tools/sias.html). Scale bar of 0.1 (flaviviruses) or 10 (dengue virus) denotes 0.1 or 10 (silent) substitutions per amino acid for the flavivirus and dengue sequences, respectively. doi:10.1371/journal.pntd.0003749.g001
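The percentage identities summarized in the Fig 1 table were obtained with the SIAS web calculator; the toy function below only illustrates the underlying pairwise percent-identity calculation on already-aligned sequences, and the two short fragments are hypothetical, not taken from the review's dataset.

```python
# Illustrative percent-identity calculation between two aligned amino acid
# sequences (the review used the SIAS web calculator; sequences are toy examples).
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percentage of aligned positions with identical residues (gaps '-' are skipped)."""
    assert len(seq_a) == len(seq_b), "sequences must already be aligned"
    compared = matches = 0
    for a, b in zip(seq_a, seq_b):
        if a == "-" or b == "-":
            continue
        compared += 1
        matches += a == b
    return 100.0 * matches / compared

# Two hypothetical aligned envelope fragments
print(percent_identity("MRCIGMSNRDFVEGVSGGSWVDI",
                       "MRCVGIGNRDFVEGLSGGAWVDL"))
```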
The results of the trials will be reviewed and discussed within the context of the host immune response and the assays used to understand and evaluate both the vaccine and the host immune response. Sanofi Trials Sanofi Pasteur developed a tetravalent chimeric YFV/DENV vaccine (CYD-TDV). The vaccine was based on the backbone of the attenuated YFV strain 17D in which the structural genes encoding for the premembrane (prM) and envelope (E) proteins of YFV were replaced with those of DENV [19]. YFV/DENV chimeric viruses were made from all four DENV serotypes. The resulting viruses thus have the attenuated replication machinery of YFV and the outer structure of a DENV serotype. Hence, the vaccine induces CD4+ T cell and antibody responses against the DENV structural proteins and CD8+ T cell responses against the YFV nonstructural (NS) proteins [20][21][22]. Preclinical in vitro assays showed genomic stability and no toxicity (reviewed in [19]) and induction of antiviral responses in human dendritic cells [23]. Subsequently, clinical studies were performed using a three-dose regimen containing 10^5 CCID50 of each YFV/DENV chimeric virus. The Phase I and II trials showed that the vaccine is safe and tolerable in humans [19,24], which was the primary end point. Additionally, the authors of the Phase II trials also determined the seroconversion and the efficacy against virologically confirmed DENV. In one study, excellent tetravalent seroconversion against DENV was noted, as 95%-100% of the individuals seroconverted [25]. Yet, in the same study, the efficacy was surprisingly low, being 30%, whilst another study reported near 64% efficacy (Table 1). These Phase II trials were conducted with relatively low numbers of participants. Next, large Phase III trials were conducted in Asia and Latin America to determine the efficacy of the vaccine. However, the recent reports of these trials were quite enigmatic. The Phase III studies in Southeast Asia and South America reported an efficacy range of 51.1%-79% and 31.3%-77.5%, respectively. Overall, the vaccine was shown to be efficacious, as the 95% CI was higher than 25% (primary end point). It should be noted, however, that the reported efficacies varied per country and per study. Additionally, when the serotype-specific efficacy was calculated, the lowest efficacy was consistently seen for DENV2 (Table 1). Strikingly, the vaccine cohort had significantly lower incidence of dengue hemorrhagic fever (80%-90% efficacy) and hospitalization (67%-80% efficacy) [27,28]. Baseline immunity seems to be beneficial in terms of developing tetravalent seroconversion and overall efficacy against symptomatic DENV (Table 1). While the protection against hemorrhagic fever is encouraging, these trials also taught us that seroconversion alone does not predict protective efficacy. Clearly, more research is required to identify the correlate of protection [29]. Furthermore, it showed us that we need to have a better understanding of the immune response to DENV infection.
Hence, below we will discuss what is known about the function of T and B cells in immunity against DENV. Most attention has been directed towards the role of antibodies in immunity against DENV, and therefore, these will be the primary focus of this review. Human Immune Response and Disease After a primary DENV infection, individuals are protected against disease upon reinfection with the homologous serotype. Cross-protection against other serotypes is limited and exists only for 1-2 months post-primary infection, while disease severity was found to be alleviated for 2-9 months thereafter [30,31]. Recent information suggests that cross-protection against severe disease lasts up to 2 years [32][33][34][35]. Intriguingly, after the cross-protective period, individuals are at risk of developing more severe dengue upon secondary infection with a heterotypic serotype. Moreover, the chance to develop severe disease increases with the time between the primary and the secondary infection [33,34]. The increased chance of severe disease can be explained by original antigenic sin, a phenomenon in which the human immune system preferentially activates memory T and B cells against the original antigen rather than instructing naïve T and B cells against the current antigen [36,37]. Indeed, it was found that upon a secondary heterotypic DENV infection, the acute T cell response is mostly directed towards the previous infecting serotype [38,39]. Over time, the T cells against conserved, cross-reactive epitopes are preferentially expanded, resulting in a DENV-broad [20,38,40] and potentially flavivirus-broad response [39,41]. As for B cells, a predominant monotypic response with high avidity against the infecting serotype is observed 6-9 days after disease onset [42,43]. Yet, within 6 months of infection, a broad cross-reactive B cell repertoire is seen [43]. Indeed, cross-reactive B cells are predominantly present at the time of secondary infection [42]. These cells have been speculated to contribute to enhanced dengue disease severity [44] (discussed below). After a secondary heterotypic infection, stable populations of DENV-broad cross-reactive B cells are seen [42,43], and these cells secrete high levels of high-avidity antibodies [42,45,46]. Antibodies are suggested to be more important than T cells in triggering the onset of severe disease. This was suggested because infants born to dengue immune mothers were noted to have a higher risk for severe disease development during primary infection [47]. Halstead and others found that waning antibody titers can enhance DENV infectivity in vitro and in vivo [48][49][50] and developed the theory of antibody-dependent enhancement (ADE) of disease [48,51]. During ADE, the pre-existing cross-reactive antibodies bind to the newly infecting DENV serotype and specifically target the immune complexes to Fc-receptor-expressing cells, cells that are highly permissive to DENV. The high viral burden triggers the immune system, which ultimately is responsible for the onset of severe signs like plasma leakage [51][52][53]. Thus, in the case of dengue, antibodies have a paradoxical role: antibodies induced during a primary infection are believed to confer lifelong protection against the infecting serotype, whereas upon reinfection with another DENV serotype, these antibodies can contribute to severe disease development.
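In vitro, the enhancement described above is commonly expressed as fold-enhancement of infection in Fc-receptor-bearing cells relative to a no-antibody control; the sketch below is only an illustration of that arithmetic, and every count and dilution in it is hypothetical.

```python
# Toy illustration of how antibody-dependent enhancement (ADE) is often quantified
# in vitro: infection with antibody present relative to a no-antibody control in
# FcR-positive cells. All counts are hypothetical.
def fold_enhancement(infected_with_ab: int, infected_without_ab: int) -> float:
    return infected_with_ab / infected_without_ab

dilutions = [40, 160, 640, 2560]   # reciprocal serum dilutions (hypothetical)
with_ab = [30, 180, 520, 210]      # infected cells with antibody present
control = 100                      # infected cells without antibody

for d, n in zip(dilutions, with_ab):
    fe = fold_enhancement(n, control)
    label = "enhancement" if fe > 1 else "neutralization"
    print(f"1:{d}  fold = {fe:.1f}  ({label})")
```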
Hence, we wished to gather information on the human antibody epitopes and their relative contributions to the human antibody repertoire after DENV vaccination and infection. Although we primarily focus on antibody epitopes, we also included a brief description of the role of T cells in connection with the CYD vaccine. Human Antibody Responses We first reviewed the antibody responses in the sera of primary and secondary DENV cases (S1 Table). The majority of antibodies are raised against the E protein, and a small fraction target the prM and the NS proteins. This is not very surprising as E and prM are exposed on the viral surface and soluble NS1 is secreted by infected cells [54]. The higher fraction of E protein antibodies suggests that the human antibody response predominantly targets DENV particles (structural proteins) rather than NS1-positive cells, i.e., infected cells or cells having bound soluble NS1 [55,56]. Interestingly, we see that during secondary infection the antibody repertoire broadens as higher responses against the prM and NS1 proteins are seen. This implies that antibodies against E, prM, and NS1 are differentially induced between primary and secondary infection (discussed further below). A detailed insight into the specific antibody repertoire may therefore help us to better understand the contribution of distinct epitopes to infection neutralization. Indeed, several elegant studies have used immortalized B cells from human blood samples to generate monoclonal antibodies from these cultures. Unfortunately, the studies conducted so far show considerable variability in numbers and epitopes of antibodies isolated from individual patients (S2 Table). This is likely due to differences in donor backgrounds and the immortalization methods used. Therefore, we next focused on those studies in which primary and secondary antibody responses or acute and convalescent samples are compared (Table 2). Even then, the results are highly variable: e.g., the prM response strongly expanded in two studies but decreased in one study. The latter study also showed a stable E response between primary and secondary responses, while the others reported a reduction thereof. Yet, when we looked at both sera and monoclonals (S1 and S2 Tables), overall, the E antibodies are dominant during the primary response. The results for secondary responses are more variable (Table 2), but in sera prM and NS antibodies are particularly detected in secondary cases (S1 Table). Furthermore, since binding of one epitope can enhance or diminish binding of antibodies against other epitopes [60][61][62], it would be interesting to see whether shifts in these ratios influence neutralization of DENV particles by antibodies against specific epitopes. Based on the tables, we tried to estimate the balance between the various targeted epitopes. For primary convalescent sera, a ratio of approximately 3 E antibodies to 1 prM antibody was found. In secondary convalescent cases, this was nearly 1 to 1. Furthermore, the E protein consists of three ectodomains (D): E DI-DIII. In humans, DI and DII are immunodominant domains relative to DIII, as 3-fold more antibodies target DI/DII than DIII. However, given the large variability, more studies are required to validate the results. Although a significant proportion of antibodies target the NS proteins, DNA-vaccine trials suggest that these are not pivotal for neutralization of infection [63,64]. Yet, the NS1 antibodies may aid in clearance of infected cells [65].
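When such ratios are estimated from panels of monoclonal antibodies, the convention (also used for S2 Table) is to compute epitope percentages within each donor first and then average across donors; the sketch below illustrates only that bookkeeping, and all antibody counts are hypothetical.

```python
# Sketch of the per-donor averaging convention: epitope percentages are computed
# within each donor first and then averaged across donors (counts are hypothetical).
donors = {
    "donor_1": {"E": 12, "prM": 4, "NS1": 2},
    "donor_2": {"E": 5,  "prM": 5, "NS1": 1},
}

epitopes = ["E", "prM", "NS1"]
per_donor = []
for counts in donors.values():
    total = sum(counts.values())
    per_donor.append({e: 100.0 * counts[e] / total for e in epitopes})

averaged = {e: sum(d[e] for d in per_donor) / len(per_donor) for e in epitopes}
print(averaged)  # average epitope percentages across donors
```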
Here, we will focus on the antibodies that directly bind to the virus and discuss the clinical relevance of these antibodies. PrM Antibodies We and others showed that prM antibodies are poorly neutralizing and highly enhancing [66][67][68][69][70]. Moreover, infection enhancement was seen over a broad range of concentrations, whereas neutralization occurred in a very narrow range and was incomplete [67][68][69][70]. Therefore, prM antibodies have been postulated to contribute primarily to antibody-dependent enhancement of dengue infection and severe disease development. Recent analysis, however, showed that although there is a robust prM response (20%-30%) during acute secondary DENV2 infection, there is no difference in the level of prM antibodies between mild and severe cases [71]. Furthermore, prM antibody levels are increased during secondary, tertiary, and quaternary infections (Table 2, S2 Table, and references therein), whereas severe disease is most often associated with secondary infection [72]. Indeed, subsequent functional analysis did not show a specific correlation between the neutralization/enhancement profile of the sera towards prM-containing particles and the onset of severe disease [71]. This suggests that prM antibodies are not a discriminating factor but act as a cofactor in disease development. Yet, given the weakly neutralizing properties of prM antibodies, it is advisable to avoid the presence of prM in vaccines. E Antibodies Many studies have been done to link neutralization to certain epitopes or structural domains of the E protein (Table 2). Most of the antibodies were found to be directed against the dengue EDII fusion loop (FL) (Table 2, S1 Table, and references therein). Furthermore, Lai and colleagues found a correlation between serum EDII FL antibodies and the potency of the serum to neutralize heterotypic DENV [46]. The relevance of these human EDII FL antibodies in protection was further strengthened by elegant tests using prM-E proteins or virus-like particles bearing mutations in the FL [46,73,74]. Based on mouse models, the EDIII was initially considered a major antigen for the induction of serotype-specific neutralizing antibodies [75,76]. Surprisingly, quite low fractions of antibodies targeting EDIII were found during human infection [37,77], and similar low fractions were found after infection with other flaviviruses [78][79][80]. Moreover, depletion of EDIII-reactive antibodies showed that these are not absolutely required for neutralization [37,78,81,82]. This suggests that the neutralization potency is predominantly facilitated by antibodies against EDI, DII, and the FL. However, and importantly, some monoclonal antibodies could not bind to monomers of E or prM but still bound the whole virion [57,58,68,81,83]. These antibodies may interact with quaternary structures [83][84][85] and effectively freeze the virus particle, as they inhibit changes within the E protein that are required for fusion. An example of such a quaternary structure is the EDI/DII hinge region, and recently, antibodies targeting this region were found to be serotype-specific and neutralizing [69,84,85]. Antibodies that bind to viral particles but not to protein monomers are potently neutralizing [58,69,83] but appear to be rare [66]. A recent report, however, showed that nearly 40% of the isolated monoclonal antibodies (mAbs) bind to quaternary structures [83].
To conclude, we see that the DENV E domains I/II are more immunodominant than the EDIII in terms of induction of antibodies in humans. Importantly, both EDI/II and EDIII antibodies were found to possess a similar neutralization potency [86], and the most neutralizing antibodies against flaviviruses appear to target quaternary structures [78,80,83,86]. These findings argue for preservation of quaternary structures in DENV vaccines. T Cells The role of T cells in immunity against dengue infection has been extensively reviewed by others [52,87], and we will briefly discuss recent findings regarding the role of T cells in immunity and pathogenesis. Whereas the CD4+ T cell response contributes to protection by instructing B cell responses against the virus [21], the importance of cytotoxic (CD8+) T cells for protection is still under debate since low T cell responses are seen during acute stages of DENV infection [36]. After peak viremia, peaks in both T cell response and cytokines are seen [36,88], suggesting that cross-reactive CD8+ T cells contribute to pathogenesis rather than protection. Furthermore, during secondary infection, T cells (like B cells) suffer from original antigenic sin [22,36,89]. The cross-reactive T cells during acute secondary infection have an altered cytokine response consisting of low interferon gamma (IFN-γ) and high tumor necrosis factor alpha (TNF-α) [88,90]. This profile has been associated with severe disease [52]. The phenomenon of original antigenic sin might be less persistent in T cells than in B cells [20], as a recent manuscript showed that multifunctional CD8+ T cells can be associated with protection against disease in a Sri Lankan population [22]. Clearly, in naïve individuals, the CYD-TDV vaccine does not induce CD8+ T cell responses to the NS proteins of DENV. The participants in the CYD trials, however, had high baseline immunity, implying that T cell responses were already present and potentially boosted by the vaccine [20,39,41]. Thus, we cannot conclude whether or not it is important to include T cell immunity for protection and if this should be induced by a vaccine. Yet, the trials had quite low efficacy results despite high antibody titers. Mouse models indicated that protection requires both B and T cells [91] and that CD8+ T cells significantly contribute to disease alleviation, even under conditions of ADE [92]. Thus, CD8+ T cells likely contribute to clearance of infection when antibodies have failed to prevent infection. Hence, T cells might be more important for DENV immunity than previously appraised. Assays for Vaccine Development Seroconversion upon vaccination is measured with various assays based on either quantification of DENV-binding antibodies (ELISA) or bioassays measuring neutralization of infection [93]. Currently, the WHO considers the plaque reduction neutralization test (PRNT), which is validated to industrial standards, as the gold standard for DENV [93]. In the case of the latter, DENV is mixed with serially diluted sera and added to a monolayer of cells. After incubation, an overlay is placed on top of the cells and plaques develop over time. The neutralization potency of the sera is defined as the dilution that neutralized 50% or 90% of the added virions. For JEV, the correlate of protection is 50% neutralization at a dilution of 1:10 or lower (PRNT50 titer of 10), and similar correlates of protection have been defined for TBEV and YFV [94].
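As an illustration of how a PRNT50 titer is read out from the assay just described, the sketch below interpolates the serum dilution at which plaque counts drop to 50% of the virus-only control; all plaque counts and dilutions are hypothetical, not trial data.

```python
# Illustrative PRNT50 calculation: find the reciprocal serum dilution at which
# plaque counts fall to 50% of the virus-only control (hypothetical data).
import numpy as np

control_plaques = 100                       # virus-only control
dilutions = np.array([10, 40, 160, 640])    # reciprocal serum dilutions
plaques = np.array([5, 20, 60, 90])         # plaques observed at each dilution

reduction = 100.0 * (1 - plaques / control_plaques)   # percent neutralization
# Interpolate on log10(dilution); np.interp needs increasing x, so reverse the arrays
log_titer = np.interp(50.0, reduction[::-1], np.log10(dilutions)[::-1])
print(f"PRNT50 titer ~ {10 ** log_titer:.0f}")  # seropositive here if titer >= 10
```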
For DENV, the exact cutoff is unknown but was expected to be similar to the viruses mentioned above. Based on these criteria, the CYD-TDV trials showed good seroconversion rates, yet for DENV2 a particularly low clinical efficacy was seen (Table 2). This shows that the PRNT assay or its interpretation requires further fine-tuning in order to find the true correlate of protection. Many parameters can be adjusted [95][96][97], such as (I) the cell line, (II) the challenge virus strain, and (III) the defined cutoff for seropositivity. Other parameters include incubation temperature [98,99] and virus source [83]. First, the current PRNT assay employs Vero cells, an Fc-receptor (FcR)-negative cell line. FcR-negative cells are inclined toward neutralization, as virus-antibody complexes are only internalized via interaction with FcR. Conversely, FcR-positive cells typically show ADE with poor neutralization [50]. Primary myeloid cells are a natural host cell of DENV and support infection in the absence and presence of antibodies, and they could be an alternative to cell lines [100]. As a start, it would be interesting to investigate if neutralization assays performed with PBMCs of vaccinees give a better correlate of protection than assays with Vero cells. It is unlikely that primary cells will be applied in an industrial setting; yet, the above studies will guide future assay development. Second, distinct DENV genotypes can cause significant shifts in the reported seropositivity (e.g., a 50% reduction) [72]. This is not surprising given the 9% variation within a serotype (Fig 1). More robust correlates of protection against a serotype could be found by including multiple genotypes reflecting the breadth within the serotype. Third, the threshold chosen for seropositivity was a PRNT50 of 10. Yet, the threshold of 50% reduction may not be optimal in terms of variability [97], and different thresholds may be needed according to the serotype [101]. Indeed, in the case of the JEV vaccines, the PRNT50 values were found to differ between the existing genotypes [102]. The DENV vaccine cohorts now provide excellent opportunities to conduct mathematical studies to find better correlates of protection using more stringent criteria for the neutralization threshold and/or serum dilution. Overall, there is a poor correlation between the current cutoff for seropositivity (PRNT50 of 10) and clinical efficacy of a DENV vaccine [25,103]. Since Sanofi will continue to monitor the vaccine participants for the next 4 years [19,27,28], the present vaccine trials now offer new prospects for studies to define the best assay and criteria that predict which vaccinees have developed protective immunity. Future studies will also benefit from the lesson of these trials, i.e., that too few participants were bled to allow for thorough correlative analysis between the antibody response and individual protection [28]. Challenges for Future Dengue Vaccines In this review, we briefly summarized the outcome of the CYD-TDV vaccine trials. The trials showed us that seroconversion of vaccinees does not necessarily correlate with clinical efficacy against symptomatic disease. This stressed how little we actually know about the human adaptive immune responses towards DENV infection. Most attention had been paid to the human antibody response, and the components thereof have been reviewed above (Table 2 and S1 Table). Based on the Sanofi trials and the reports on the human antibody response, some challenging questions are discussed below.
Better Responses after Flavivirus Priming? The CYD-TDV trials reported higher antibody titers in individuals who were flavivirus-positive at baseline than in naïve individuals [20,26,104]. Also, priming apparently gives a higher chance of tetravalency [20,26] and better efficacy [27,28]. The better efficacy results in primed individuals suggest that the immune response is different in naïve and primed individuals. In naïve individuals, only the DENV antibody response is triggered by CYD-TDV, while in primed individuals, B and T cell responses are boosted, the latter likely through flavivirus-broad conserved epitopes. Yet, the lower antibody levels in flavivirus-naïve individuals could not be compensated for by repeated vaccination [26]. This raises the question of whether the vaccine preferentially expands pre-existing (cross-reactive) immunity and weakly induces de novo immunity. If so, the vaccine may be less beneficial for young children in endemic countries and travelers. Absolute Requirement for Tetravalency? The current dogma is that vaccination should induce serotype-specific antibodies against all four DENV serotypes. Pierson and colleagues suggested that all antibodies that can bind and neutralize DENV can also promote enhancement of infection, irrespective of the epitope [105]. If all antibodies support ADE and neutralization, high titers of cross-reactive antibodies may be sufficient for protection. Yet, a recent study showed that inapparent and apparent dengue cases have similar DENV-immunoglobulin G (IgG) titers but can be distinguished based on whether the sera show heterotypic neutralizing capacity or not [106]. Future studies should address whether protection against infection depends on the balance of monotypic antibodies and heterotypic antibodies and/or the cumulative titer of all DENV antibodies. Why Low Efficacy towards DENV2? The CYD-TDV showed excellent seroconversion but did not result in high efficacy against symptomatic DENV2. The lack of CD8+ T cell responses has been suggested as one explanation [22]. Recently, there is also a growing awareness about the role of the genotype used within the vaccine. Various genotypes of the same serotype can concurrently circulate within endemic areas [107,108]. A mismatch in the genotypes can significantly reduce the ability of the sera to neutralize infection [72] or may even lead to ADE [7,8]. The low efficacy against DENV2 in the Thai Phase IIb trial was suggested to have occurred because of a mismatch in the vaccine genotype and the circulating genotype [25,109]. If mismatches are indeed important, close surveillance and prediction of the circulating genotypes are crucial. Annual reformulation may be beneficial for protection. Vaccine Formulation The formulation and administration regime of the ideal vaccine is a challenging topic. Subunit vaccines with monomer proteins are safe and can be easily reformulated. However, subunit vaccines also induce antibodies against epitopes that are inaccessible on virus particles due to protein-protein interactions [110] and lack quaternary structures, which are currently the most potent epitopes for neutralization [58,69]. Induction of antibodies against quaternary structures could be facilitated by using whole inactivated viruses, attenuated virus strains, or chimeric viruses. These three options have their pros and cons.
Inactivated vaccines are noninfectious and may induce lower titers of neutralizing antibody compared with live vaccines or infection [66,78], likely since different gene expression patterns are induced [23,111]. Lastly, attenuated virus strains mimic the actual pathogen as closely as possible, have the desired quaternary structures, and can induce high antibody titers. Yet, the chimeric vaccine lacks DENV-specific CD8+ T cell responses. Moreover, attenuated vaccines can mutate after administration and potentially become virulent, causing health risks, e.g., as seen in polio virus vaccines [112,113]. So far, the results of the Sanofi trials show that the attenuated CYD vaccine is very safe, with no evidence of ADE. Follow-up monitoring of these and future cohorts is important to show that the vaccine is safe over prolonged time periods [19]. The paradox of a DENV vaccine is thus that a vaccine should be sufficiently virulent to induce high antibody titers yet still be attenuated to be safe. In summary, the recent Phase III trials showed safety and excellent seroconversion [24], although seroconversion did not necessarily imply good efficacy, as shown by DENV2. A major challenge for the future would be to define what assay and criteria predict successful immunization and clinical efficacy. Still, the CYD-TDV offers promise to prevent hospitalization and severe dengue hemorrhagic fever, which is encouraging news. These CYD-TDV trials offer plenty of clues to gain more knowledge about the human response against DENV, the cross-reactivity with and potential cross-protection against flaviviruses, and the interpretation of antibody-based neutralization assays. Knowledge on this will aid future vaccine development against viruses and pathogens other than DENV. Key Learning Points • Vaccines should preferably induce antibodies against quaternary structures. • Distinct antibody repertoires are seen for primary and secondary infections. • The CYD-TDV trials offer possibilities for retrospective analysis to identify correlates of protection. • To find correlates of protection, further validation and standardization of neutralization assays is required. • T cells could be more important in DENV immunity than previously appreciated. Top Papers in the Field In these reports, the efficacies of the CYD-TDV vaccines are reported for the first time, based on large cohorts in Asia and Latin America. Although the efficacy against DENV2 is quite enigmatic, the overall efficacy against severe disease and hospitalization offers perspective. Here, the authors show that potently neutralizing antibodies appear to be directed towards quaternary structures, thus providing insight on the requirements of a dengue vaccine. This paper shows the importance of T cells in immunity against dengue virus infections, clearly advocating against a focus on antibodies alone. The translation from in vitro plaque reduction neutralization assays to in vivo protection has been seriously hampered by the lack of uniformity in the assays and controls. With this paper, the authors are providing insight on the variance of the assays and definitions of neutralization. Moreover, clear solutions are suggested for the standardization thereof. Supporting Information S1 Dataset. E amino acid sequences used in the review. The information is given as follows: country of isolation_strain_year of isolation (if known). (DOCX) S1 Table. An overview of the dengue antibody response in human sera.
In this table, the focus is on the development after primary (1st) and secondary (2nd) infection, with the stage of disease at the moment of serum sampling being convalescent (conv.) or unknown. If unknown, only the stage is presented. We grouped the results of primary and secondary infections for individual reports in order to visualize the effects of secondary infection on the antigens targeted and the relative magnitude of antibodies against the epitopes. m.p.i.: months post infection. n.d.: Not determined. (DOCX) S2 Table. An overview of human monoclonal antibodies derived from immortalized B cells. An overview of human B cell-derived monoclonal antibodies from dengue-infected humans whose PBMCs were taken after primary (1st) or secondary (2nd) infection. The stage of disease was either acute (ac) or convalescent (conv.). Note to table: in reports in which multiple donors had been used, all percentages are first calculated as % per donor and then averaged over all donors. Hence, some percentages can differ from reports in which the value is reported as % of the whole experiment. n.d.: not determined. EDI/DII and DIII refer to the structural domains within the E ectodomain. Reports were selected based on whether they (I) were the first to describe the monoclonal antibodies, (II) screened against several epitopes, and (III) used an unbiased approach to generate the monoclonals. (DOCX)
6,836.2
2015-06-01T00:00:00.000
[ "Biology", "Medicine" ]
COMPASS: Directing Named Data Transmission in VANETs by Dynamic Directional Interfaces Inefficient data transmission has been a development bottleneck of Vehicular Ad-hoc NETworks (VANETs), especially in urban areas. It has been proved that many complex IP-based solutions are difficult to apply in the highly dynamic and link-interrupted vehicular environment. In recent years, Named Data Networking (NDN) has become the most popular realization of Information-Centric Networking (ICN) for future networks. Its characteristics of multi-source, multi-path and in-network caching are helpful for improving the data transmission in VANETs. However, the bottom layer of vehicles cannot provide multiple interfaces to different domains like the routers in wired networks. Thus interface-based forwarding degenerates into directionless broadcasting with low performance and high overhead. To address this problem, we propose COMPASS, a novel named data transmission protocol for VANETs. Firstly, a dynamic directional interface model is built as the cornerstone of our COMPASS. Secondly, the forwarding strategies are improved for rapid interest dissemination and named data retrieving. Besides, an interface remapping method and the update strategies of Forwarding Information Base (FIB) and Pending Interest Table (PIT) are designed to enhance the robustness in a high-mobility environment. Finally, the performance of COMPASS is verified on ndnSIM. Compared with three other state-of-the-art protocols, COMPASS obtains the highest interest satisfaction ratio and the shortest transmission delay in urban traffic scenarios with restricted communication and storage overhead. I. INTRODUCTION In recent decades, VANETs have been a research hotspot for both academia and industry. With the deployment of Dedicated Short-Range Communications (DSRC) and the introduction of IEEE 802.11p (WAVE) [1], safety applications such as collision avoidance and accident warning have been implemented by inter-vehicle broadcasting. However, as on-board applications are upgraded, broadcasting is not a viable solution to provide high-quality data transmission for sharing traffic conditions, enhancing driving experiences and enriching online entertainment [2]. For VANETs, it is a great challenge to achieve efficient and reliable data transmission in the poor environment with highly dynamic topology and intermittent links. Given the success of the Internet, a lot of research focuses on building an IP-based network on the physical and link layers of WAVE so as to achieve addressing and routing among vehicles. Due to the host-centric nature of IP, many problems arise when it is implemented in VANETs, such as address management, route discovery and session maintenance. Although some studies [3] attempt to fix IP by leveraging complex solutions, it is still difficult to achieve high performance in the vehicular environment. Therefore, it is more important for vehicles to retrieve data than to maintain a multi-hop path to the data source. Aiming at the problems of IP-based networks, ICN [4] is proposed as a new paradigm for future networks. Among its realization schemes, NDN [5] is the most popular one that identifies and transmits data by a pre-designed hierarchical name.
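Such hierarchical names are matched against forwarding state by longest-prefix lookup, which underlies the interface-based forwarding discussed next; the minimal sketch below is only an illustration of that idea, and the name prefixes and interface labels in it are made up rather than taken from the paper.

```python
# Minimal sketch of longest-prefix matching of a hierarchical NDN name against a
# FIB; names and interface labels are illustrative only.
fib = {
    "/traffic/wuhan": ["if_north"],
    "/traffic/wuhan/ring3": ["if_east", "if_south"],
    "/media/music": ["if_west"],
}

def fib_lookup(name: str):
    """Return the outgoing interfaces of the longest FIB prefix matching `name`."""
    components = name.strip("/").split("/")
    for i in range(len(components), 0, -1):
        prefix = "/" + "/".join(components[:i])
        if prefix in fib:
            return fib[prefix]
    return []  # no matching entry: a VNDN node would fall back to broadcast

print(fib_lookup("/traffic/wuhan/ring3/segment12/speed"))  # ['if_east', 'if_south']
```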
NDN provides a multi-source, multi-path and in-network caching way of data retrieving, which is ideal for handling the transport challenges of VANETs [6]. Some research works [6]-[8] have attempted to deploy NDN on VANETs and proposed the concept of Vehicular Named Data Networking (VNDN) [9]. Since NDN is originally designed for wired networks, it does not take into account the different access technologies of wireless networks. Consequently, the vehicle nodes cannot control the named data transmission by leveraging the underlying interfaces. As shown in Figure 1, the NDN routers deployed in wired networks manage two data structures: FIB and PIT. Each FIB or PIT entry is bound to one or more interfaces. Before sending interest or named data, the routers query their FIB or PIT to determine the outgoing interfaces. In this way, the whole network is divided into several domains by each router, and named data is transmitted across domains through different interfaces. In VANETs, the vehicles are usually equipped with the same wireless communication devices. They cannot divide the network space by using only one interface. To achieve named data transmission, the vehicles have to broadcast at each hop [10]. To create extra interfaces for data transmission, some studies [11] suggest that the vehicles adopt multiple wireless communication technologies in the bottom layer, such as WAVE, 4G/LTE and WiMax. This approach increases the cost of car production but still cannot provide a clear network division due to the overlaps between coverages of different communication technologies. Due to the lack of interface support from the underlying layer, the vehicles' FIB and PIT are ineffective during named data transmission. If all the nodes disseminate interest and retrieve named data by unrestricted broadcast, it is likely to cause a broadcast storm by flooding. So far, some methods [12]-[14] have been proposed to alleviate this problem, such as adopting a delay-based distributed broadcast algorithm and selecting the optimal neighbor as the relay at the next hop according to one or multiple attributes [15]-[18]. Although these efforts can mitigate the broadcast storm by reducing network redundancy, they cannot achieve efficient named data transmission. Besides, other studies [19]-[21] attempt to direct interests to the potential providers or their surrounding areas. However, the idea that the consumers obtain the details of providers before sending an interest is inconsistent with NDN's content-centric nature and also unfeasible in actual traffic scenarios. In summary, efficient data transmission in VNDN requires accurate control of interest and named data forwarding at each hop. It is critical to provide a valid solution for vehicles, which can successfully associate the virtual named data space with the actual network space by interfaces. In this paper, we propose a novel named data transmission protocol for VNDN. Just as a compass helps a traveler navigate, it controls interest and named data forwarding at each hop by dynamic directional interfaces. That is why we name it COMPASS. Compared with the previous works, COMPASS can achieve a higher interest satisfaction ratio and a shorter transmission delay with restricted communication and storage overhead. Its novelty can be summarized into the following three points.
• To provide effective interfaces for vehicles, we discuss the challenge of building interfaces in VNDN, analyze the defects of the static division strategy when it is applied to urban traffic scenarios, and propose a dynamic directional interface model (DDIM) which flexibly performs division according to the driving direction and provides directional interfaces with high robustness. • Based on DDIM, COMPASS is proposed as a novel transmission protocol for VNDN. In COMPASS, both the data structures and forwarding strategies are redesigned to achieve efficient interest dissemination and named data retrieving. Besides, an improved delay-based distributed broadcast algorithm which allows 0-delay forwarding at high-priority nodes is designed to speed up the entire transmission process. • To deal with the high mobility of the vehicular environment, an interface remapping method is put forward to maintain the validity of FIB and PIT entries after the driving directions of vehicles have changed. In addition, the update strategies of FIB and PIT are also improved so as to enhance the adaptability of vehicles in high-mobility traffic scenarios. The rest of this paper is organized as follows. In section II, related work on named data transmission in VANETs and discussions about our motivation are presented. Through comprehensive analysis and comparison, our DDIM is proposed in section III as the foundation of COMPASS. Then, the data structures and forwarding strategies are redesigned in section IV. To support high mobility, an interface remapping method as well as the update strategies of FIB and PIT are introduced in section V. In section VI, our COMPASS is implemented on ndnSIM and compared with three state-of-the-art protocols. Finally, the whole research work is concluded in section VII. II. BACKGROUND AND MOTIVATION The inability of the underlying layer to provide valid interfaces is a great challenge in deploying the original NDN directly on vehicular networks. In real traffic scenarios, since vehicles are uniformly equipped with one type of wireless communication device, both interest and named data are sent or forwarded by broadcast. The transmission process is achieved by unrestricted flooding, which easily causes a broadcast storm. According to the pull-based mode of NDN, the named data is retrieved along the reverse path of the interest. Hence, how to effectively control the interest propagation is a key issue in designing a transmission protocol for VNDN. In recent years, some studies have focused on achieving efficient named data transmission by directing interest propagation. According to their forwarding principles, they can be divided into three categories: blind forwarding, neighbor-aware forwarding and destination-aware forwarding. A. BLIND FORWARDING Blind forwarding means that vehicles have no information before sending or forwarding an interest. To avoid conflicts, the delay-based distributed broadcast is a commonly adopted interest suppression technology. In [12], a uniform random value from the range of 0 to 2 milliseconds is assigned to the collision-avoidance timer at every node. In this way, even when the neighboring nodes receive an interest packet at the same time, they are likely to schedule the forwarding operations at different moments. A similar approach with a counter-based suppression technique is proposed in [13], where different defer timers are set for interest and data forwarding to prioritize data over interests. During the defer time, a node overhears the channel.
If the same interests or data are broadcast by other nodes for a number of times, the node will abort its suspended forwarding. However, the random approach is not an optimal solution for distributed broadcast. In consideration of the predicted link stability between neighboring vehicles, a defer timer function is designed in LISIC [14] to determine the forwarding priority of all the neighbors. Upon an interest reception, a vehicle first estimates its link stability with the sender or relay at last hop by their locations, velocities and directions, then sets a defer timer for current interest based on the predicted connectivity duration. A similar approach is also followed in MobiNDN [22]. During the interest dissemination, the calculation of defer time is coupled with the concept of sweet spot. The vehicle at a favorable position has a high priority of interest forwarding by setting a low delay value. Although effectively alleviating conflicts among neighboring nodes, the delay-based distributed broadcast introduces an additional delay in the transmission process and cannot guarantee the coverage of every potential provider. Different from the above studies, interest flooding is allowed in CODIE [23] to speed up the searching for data sources, but a hop limit is carefully set for the progress of named data retrieving. In CODIE, the redundant copies of named data are discarded at the relays according to their hop limits. In this way, it alleviates the packet conflicts and reduces the transmission overhead. LAPEL [24] is an improved version of CODIE. Besides hop limit, an adaptive algorithm is designed to calculate the lifetime of PIT entries. For an interest, the relevant PIT entries at intermediate nodes along a transmission path are assigned with different lifetimes. Since the intermediate nodes can detect packet loss and trigger a retransmission whenever its lifetime expires, the whole transmission process can be accelerated with less overhead than that in CODIE. B. NEIGHBOR-AWARE FORWARDING The idea of neighbor-aware forwarding is that vehicles can obtain the status information of neighbors by exchanging messages and select one or more candidates from the neighbor list as the relays at next hop when forwarding an interest. In [25], the area around the interest sender (or forwarder) is divided into 4 quadrants. The farthest neighbor in each quadrant is selected as the relay node. To this purpose, two additional packets (ACK, CMD) are exchanged between the sender and its neighbors. In order to reduce the bandwidth usage induced by interest flooding, the NAIF [15] scheme decides the eligibility of a relay node based on its data retrieval rate for a given name prefix and its distance to the consumer. Similarly, the motion parameters of vehicles and link quality metrics are introduced in MUPF [17] which helps the interest relays in multiple unicast paths to select stable and reliable neighbors as next hops. To mitigate the interest broadcast storm, a RobUst Forwarder Selection (RUFS) is proposed in [16]. In RUFS, each vehicle shares its satisfied interest(s) statistics with neighbors. All neighbors store this information in their neighbors satisfied lists and select the optimal neighbor as the interest forwarder by leveraging multi-criteria decision. For the same purpose as RUFS, a Distributed Interest Forwarder Selection (DIFS) scheme is proposed in [26]. Different from RUFS, the forwarding decision is made at the interest receiver. 
When the node at last hop sends an interest, its status information is piggybacked in the packet. After receiving this interest, the immediate neighbors rank themselves to be an eligible interest forwarder by using multiple attributes. C. DESTINATION-AWARE FORWARDING The studies on destination-aware forwarding can be divided into two categories. One is that the consumer has mastered the information of content providers. The other is that the consumer is not aware of any reachable provider but knows the area where the potential data source locates. According to NDN, a large content should be split into multiple chunks for delivery. Thus, the consumer needs to continuously send a number of interests for those chunks. After successfully retrieving the first chunk, the consumer obtains the information of one or more providers, such as identifier, position, and expected hops. Therefore, the idea of specifying the preferred provider in the subsequent interests is proposed in E-CHANET [19] to deal with broadcast storm and reduce communication overhead. This idea is followed by [27] in which a location-based and information-centric (LoICen) architecture is built to improve the content procedure and mitigate the broadcast storm in VNDN. In LoICen, vehicles opportunistically obtain the location information of the potential providers that might have desired content in their cache. This location information is used whenever possible to improve content search by directing interest packets to the area where the content may be located. To suppress interest flooding, a content-centric framework named GeoZone is proposed in [21]. By leveraging a geo-referenced naming scheme, GeoZone builds a dissemination zone with the GPS coordinates of consumer and content provider. It is an important assumption for the above works that the consumer should obtain the information of provider before sending interest. Although large content transmission is a suitable application scenario, the provider is unknown in most of applications. Additionally, it is challenging for provider maintenance in a highly dynamic environment. In [28], a naming scheme is designed to incorporate geolocation into names so that the problem of forwarding interest to a specified provider is transformed to directing it to the area where the probability of meeting the required data is maximized. Although the substitution of data spot for provider facilitates the interest propagation, it is only feasible for the applications coupled with geolocations. Aiming at steering interest towards where data resides, Navigo [20] maps data name to divided areas by introducing multiple geographic faces in the 2.5 layer and forwards interest along the shortest path. This is a complete solution for deploying NDN framework on vehicular networks, but it is not enough to locate the data source only by the self-learning of vehicles. D. MOTIVATION For the target of efficient named data transmission, it is necessary to direct interest packets to the potential content providers or their locations. However, the data-centric nature of VNDN makes it unreasonable for consumers to obtain the accurate information of providers before sending interests. Therefore, it is a great challenge for VNDN to resolve the contradiction between its data-centric nature and high transmission requirements. 
Since the studies on blind forwarding and neighbor-aware forwarding attempt to search for the content across the entire network, they have to adopt the interest suppression technologies and choose the stable neighbors as next-hop relays for broadcast storm mitigation. The research works on destination-aware forwarding try to point the way for interest dissemination, but how to locate the data source is a knotty problem. Different from the previous work, a novel protocol named COMPASS is proposed for named data transmission in vehicular networks. In COMPASS, we build a dynamic directional interface model (DDIM) in NDN layer by analyzing the movement patterns of vehicles in urban traffic scenarios, design a novel distributed broadcast algorithm for interest dissemination and data retrieving on basis of DDIM, and finally improve the solutions of interface remapping and data structure management for high mobility support. III. BUILDING INTERFACES FOR VEHICLES For efficient data transmission in VNDN, how to provide effective interfaces for moving vehicles is the primary problem to be solved. In this section, the challenge of building interfaces in urban traffic environment is first summarized, then the static division strategy is deeply studied and proved not applicable to VNDN from both theoretical analysis and data statistics, and finally our Dynamic Directional Interface Model (DDIM) is introduced with its inherent advantages. A. CHALLENGE OF BUILDING INTERFACES Although there has been a number of communication technologies and protocols designed for VANETs in physical and link layers, it is unfeasible to forward data among those different interfaces due to incorrect domain division and high equipment cost. Therefore, we make a basic assumption that all the vehicles and infrastructure are equipped with one type of wireless communication device. On basis of it, we try to build multiple virtual interfaces in NDN layer instead of the hardware interfaces in underlying layers. According to the research on Internet of Vehicles (IoV), it is common and effective to use the GPS information for data transmission improvement. Some previous studies have achieved directional flooding [29] and routing [30]- [32] by dividing the area around vehicle into several sections or quadrants. Inspired by those efforts, we propose the idea of building directional interfaces for every node in VNDN. By leveraging the geographic information provided by an onboard GPS device, a vehicle can divide its wireless coverage area into multiple directional interfaces. Every sector covered by the directional interface contains a number of neighbors. Through these interfaces, the vehicle can quickly classify its neighbors and easily specify its broadcast area. Since both interest and named data are forwarded according to the directional interfaces recorded in the FIB and PIT entries, maintaining the mapping relationships between neighbors and directional interfaces as long as possible is the key for vehicles to achieve efficient named data transmission in highmobility and link-intermittent vehicular environments. In urban traffic scenarios, the mapping between neighbor and interface is easily broken due to many reasons which can be grouped into three categories. (1) The neighboring vehicle changes its motion state. For example, a neighbor makes a sudden turn at an intersection and quickly moves out of the directional interface which it belongs to before. (2) The local vehicle changes its motion state. 
If a vehicle changes its driving direction dramatically, it will leave most of its neighbors. As a result, a large number of mappings become invalid. (3) The direction of the current road changes. Due to the close angles between the road direction and the interface boundaries, some neighboring vehicles may jump from one interface to another frequently during the driving process. As shown in Figure 2, cars A and B are moving along a road. At first, B is located in the westbound interface of A. When the road turns north, although the relative position of the two cars remains unchanged, B is relocated to the northbound interface of A. Additionally, in some complex scenarios, the mapping may be broken by one of these reasons or by a combination of several of them. In the scenario of Figure 3, two cars approach the intersection from the east, and B belongs to the westbound interface of A. For each car, there are four choices at the intersection: going straight, turning left, turning right, and turning around. Thus, the mapping between B and A's westbound interface may fail due to A's driving direction change, B's driving direction change, or both. Based on the above analysis, improving robustness in complex urban traffic environments is a great challenge when building directional interfaces for moving vehicles. When designing an interface division scheme, three issues affect its robustness: (1) How to select the reference for interface division? (2) How to set the number of interfaces for vehicles? (3) How to provide adaptability to changes in the scenario?
B. STATIC INTERFACE DIVISION STRATEGY
Some previous works [29]-[32] on directional data transmission adopt the fixed interfaces provided by the static division strategy. In their designs, the map direction is set as the reference. Although the reference direction and the interface number vary among different schemes, all the interfaces are fixed after division. In Figures 2 and 3, car A adopts a classic Static Directional Interface Model (SDIM) in which four fixed interfaces are built according to the geographic directions of north, west, south, and east; a minimal code sketch of this fixed division is given below. The static interface division strategy is easy to implement on vehicles, but it has the following two defects when applied in an urban traffic environment. First, a fixed interface may fail due to a change of road direction. In the example of Figure 2, car B will no longer belong to the westbound interface of A once the direction of the road turns to the northwest. To verify that this failure is common in urban traffic scenarios, we analyze the applicability of the classic SDIM in three typical cities. In Figure 4, the road networks of the three cities are drawn by OSMnx [33]. Beijing is a long-established imperial city whose roads are planned to follow the north-south and east-west directions. Thus, it provides a nearly ideal environment in which the vehicles' fixed interfaces rarely fail due to changes of road direction. Wuhan is an inland shipping hub in central China. Its road planning follows the flow direction of the Yangtze River. Since a large number of roads have direction angles close to those of SDIM's interface boundaries, failures of the fixed interfaces occur frequently. San Diego is an important port on the west coast of the United States. As shown in Figure 4c, there are also some roads along the coastline or connecting inland that will cause the fixed interfaces of SDIM to fail.
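To make the static division concrete, the following Python sketch (our illustration, not code from the paper) assigns a neighbor to one of SDIM's four fixed interfaces from the compass bearing between two vehicles. The 45-degree interface boundaries assumed here are what make roads running close to those boundaries cause neighbors to flap between adjacent interfaces.

```python
import math

# Fixed SDIM interfaces, assuming boundaries at the diagonal bearings
# 45, 135, 225 and 315 degrees (our assumption; the paper only states
# that the four interfaces follow north, west, south and east).
SDIM_INTERFACES = ["north", "east", "south", "west"]

def sdim_interface(local_pos, neighbor_pos):
    """Return the fixed interface that neighbor_pos falls into,
    seen from local_pos, using map (compass) directions as reference."""
    dx = neighbor_pos[0] - local_pos[0]   # east component
    dy = neighbor_pos[1] - local_pos[1]   # north component
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0   # 0 = north, clockwise
    # Shift by 45 degrees so each interface becomes one 90-degree bin.
    return SDIM_INTERFACES[int(((bearing + 45.0) % 360.0) // 90.0)]

# A road heading roughly northwest keeps a leading neighbor near the
# north/west boundary, so a small position change flips the mapping:
print(sdim_interface((0, 0), (-9.9, 10.0)))   # north
print(sdim_interface((0, 0), (-10.0, 9.9)))   # west
```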
For further verification, we analyze the taxi GPS trajectory datasets from those three cities. The properties of datasets are detailed in Table 1. To obtain accurate results, we first preprocess the raw data by cleaning the dirty records caused by GPS coordinate drift, then find out the trips of each vehicle and divide them into cruises, finally make statistics based on the GPS records of vehicles in cruise state. Figure 5 shows the driving direction distributions of vehicles in the form of radar chart. In Beijing, vehicles head four directions of north, west, south and east in most cases, which is consistent with its road planning. In the other two cities, vehicles often travel along the roads whose direction angles are close to those of interface boundaries. As a result, the fixed interfaces will fail frequently in Wuhan and San Diego. The road planning determines the driving direction of vehicles running in cities. Although we can try to select the optimal reference direction and interface number for static division schemes, it is difficult to avoid the close angles between some roads and interface boundaries. Therefore, the failure of fixed interface caused by road direction change is an inherent defect of static division strategy. Second, either the local vehicle or its neighbor changes motion state, it is difficult for static division schemes to maintain the validity of relevant mapping. In the example of Figure 3, both cars A and B have four choices at the intersection. To evaluate the adaptability of classic SDIM in this typical scenario, we illustrate 16 possible cases in Table 2. The horizontal headers list A's four choices, while the vertical headers show B's. As marked by the cross, the mapping between B and A's westbound interface will fail in most cases. B will stay in the original interface of A only if both of them cross the intersection and move to the west. Therefore, the static division strategy cannot guarantee the robustness of fixed interface in the scenario of an intersection. To prove that the scenario in Figure 3 is common in urban areas, we calculate all the turning angles of intersections in those three cities by leveraging google map and plot their distributions in Figure 6. The X -axis lists the serial number i of angle interval, and the Y -axis represents the ratio of samples belonging to the specified interval. In the statistics, the angle range of [0, π] is divided into 13 intervals. In Figure 6, the serial number i is mapped to the interval of [(i + 2)π/18 − π/36, (i + 2)π/18 + π/36) (i = 2, . . . , 12) except for the first and last cases. In particular, No. 1 corresponds to the interval of [0, 7π/36), and No. 13 corresponds to the interval of [29π/36, π). As shown in Figure 6, the turning angles of three cities are distributed normally with a common mean of 90 degrees. Although the sample ratios are different, all the values of No. 7 interval are above 65%. It means that the standard intersection scenario is common in urban areas. In conclusion, the fixed interface provided by the static division strategy will frequently fail in urban traffic environment due to the changes of road direction and vehicles' motion state. Thus, it is not an optimal candidate to provide effective support for named data transmission in VNDN. C. DYNAMIC DIRECTIONAL INTERFACE MODEL Aiming at the defects of static division strategy, we propose a novel Dynamic Directional Interface Model (DDIM) based on the movement patterns of vehicles in urban areas. 
As shown in Figure 7, we set the vehicle's driving direction as the reference for interface division. In DDIM, the vehicle-centric area is divided into four parts, and each part is mapped to one directional interface. In Figure 7, each interface is labeled relative to the driving direction; for example, the front interface (F-INF) covers the quadrant ahead of the vehicle and the left interface (L-INF) covers the quadrant to its left. Compared with the static division schemes, DDIM makes the following three improvements. First, the driving direction of the local vehicle, instead of the map direction, is set as the reference for interface division. Although vehicles need to perform the division multiple times during the driving process, DDIM can overcome the interface failures caused by changes of road direction. In the scenario described in Figure 2, no matter how the road direction changes, car B will always stay in A's F-INF when DDIM is used. Second, DDIM is more suitable for the intersection scenario than the static division schemes. Since DDIM provides four 90-degree interfaces, a remapping operation can be adopted by vehicles to adjust the relationship between the neighbors and their directional interfaces after they have changed driving directions. Thus, an interface remapping method is designed to improve the performance of DDIM in urban traffic environments; more details are introduced in subsection V-A. To verify the advantages of DDIM, we also evaluate its adaptability in the typical scenario of Figure 3. According to the 16 cases listed in Table 3, we find that if A and B make the same decision at an intersection, B will still stay in the F-INF of A. Besides, as long as B keeps moving in its initial direction, A will be able to relocate the interface to which B belongs even if A changes its own driving direction. Thus, three cells in the first row are marked by the triangle symbol, which means that the original relationship can be preserved through interface remapping. Finally, the directional interface provided by DDIM has a considerable lifetime during a vehicle's trip. We calculate the average cruise durations in the three cities and their average proportion of the trip time. As shown in Figure 8, all the average durations are longer than 1 minute, and the value for Beijing even reaches 2 minutes. From the perspective of the whole trip, the average proportions of the cruise durations in the three cities are all above 50%. Therefore, even without the remapping operation, DDIM can guarantee that the mappings between neighbors and directional interfaces remain valid for a considerable part of each trip.
Based on the above analysis, we adopt DDIM as the cornerstone of the entire approach. In urban traffic scenarios, all the vehicles build their DDIM by leveraging the GPS service. Regardless of the communication technologies adopted by the underlying layers, a vehicle only needs to select the directional interface for named data to indicate the transmission range at the next hop.
IV. NAMED DATA TRANSMISSION BY DDIM
According to the analysis in subsection III-C, DDIM provides virtual directional interfaces in the NDN layer instead of the hardware interfaces in the underlying layers. Based on it, we design a novel named data transmission protocol for VNDN and name it COMPASS. Figure 9 shows the protocol stack of COMPASS and the simplified process of named data transmission. DDIM is built in the NDN layer of every node and is the cornerstone of our entire approach. Consistent with the original NDN, three roles are defined in COMPASS: consumer, intermediate node, and provider. The consumer sends an interest to request data. An intermediate node that does not have the matching data may forward this interest after receiving it.
If the interest arrives at a provider, a named data packet will be generated and returned along the reverse path. Forwarded by the intermediate nodes, the named data finally arrives at its consumer. By using the directional interfaces provided by DDIM, COMPASS optimizes the forwarding of interest and named data at each hop in order to (1) speed up the entire transmission process, (2) reduce the communication and caching overhead, and (3) improve the success rate of named data retrieval. In the following paragraphs, we first introduce our design of packet types and data structures, then elaborate a complete transmission process including both interest forwarding and named data retrieving, and finally propose an improved delay-based distributed broadcasting algorithm.
A. PACKETS AND DATA STRUCTURES
Following the original NDN design, COMPASS uses only two types of packets, interest and data. As shown in Figure 10, both interest and data packets contain several new fields in addition to the basic settings. First, the V_ID is added for vehicle identification, and the POS and VEL are set to describe the motion state of the local vehicle. Second, the 4-bit D_INTs field is designed to specify the directional interfaces for named data forwarding. Third, as shown in Figure 10, the PR_List field carries the preferred relay (PR) designated for every available directional interface at the next hop. During the process of distributed multi-hop broadcast, the node selected as a PR obtains the highest priority for forwarding the current interest. In COMPASS, all nodes still maintain the three basic data structures: CS, FIB and PIT. Since both the interest forwarding and the named data retrieving depend on DDIM, we make several appropriate modifications to the FIB and PIT. The structure of the FIB is shown in Figure 11. In every FIB entry, the Name and Timestamp are fields retained from the original NDN. The Name identifies the data received before, while the Timestamp records the time when the current entry was created. Besides, the 4-bit D_INTs field is also added to the FIB entry to record the directional interfaces on which the local node received the matching data. For every available interface labeled in D_INTs, a preferred relay (PR) is set to accelerate the interest forwarding. Since a vehicle may receive multiple copies of named data from different directions, one FIB entry can be marked with several directional interfaces as well as their PRs. In Figure 11, IS_RMP is a boolean field which indicates whether the current entry has undergone a remapping operation. If IS_RMP is true, the D_INTs field has been reset due to a change of driving direction. In subsection V-A, the details of interface remapping for FIB and PIT management are introduced. Similar to the FIB, the PIT contains the fields Name, Timestamp, IS_RMP and D_INTs. The Name describes the data requirement of the interest, and the Timestamp records the creation time of the PIT entry. The 4-bit D_INTs field marks the directional interfaces on which the local vehicle received copies of the current interest. IS_RMP again indicates whether those directional interfaces have been remapped due to a driving direction change. Besides, the PEL is a field retained from the original NDN which records the lifetime of the current PIT entry. To mark whether the local node has forwarded the interest, the IS_FWD field is added to every PIT entry. Therefore, although a vehicle creates one PIT entry whenever it receives a new interest, it only sets the value of IS_FWD after the interest has been successfully forwarded.
Finally, the FIB_E_ID field is added to record the number of the FIB entry that served as the reference for forwarding the current interest. In this way, the vehicle can update this FIB entry whenever it receives the matching data. On the basis of the above design, we can select the directional interfaces for interest and named data packets by means of the D_INTs field. Upon receiving a packet, a vehicle should determine whether it belongs to one of the directional interfaces marked in D_INTs before any other operation. As shown in Figure 10, both the interest and named data packets contain the coordinates and speed vectors of their senders or forwarders at the last hop. The vehicle can also obtain sensing data about its own motion state at any time. Assuming that all the data are represented in the same geographical reference system, we propose an optimized method for interface determination by fast coordinate transformation. Take the vehicles in Figure 13 as an example. When car B receives a packet from car A, it should figure out which directional interface of A it belongs to. In the coordinate system OXY, the position of A is (x_a, y_a), while the position of B is (x_b, y_b). As shown in Figure 13, the speed of A is represented by the vector v_a, whose X-axis and Y-axis components are denoted u_a and w_a, respectively. Obviously, if B performs the interface determination for the received packet directly in the coordinate system OXY, the calculation is complicated. For optimization, we build a new coordinate system O'X'Y' whose origin is located at A's position (x_a, y_a) and whose positive X'-axis is A's driving direction rotated clockwise by π/4. In this way, the four quadrants of O'X'Y' are consistent with the directional interfaces of A's DDIM. Consequently, the problem is transformed into recalculating the coordinates of B in O'X'Y' and determining the quadrant in which it is located. Thus, the decision process consists of two steps. First, we calculate the rotation angle from OXY to O'X'Y', labeled α in Figure 13. According to formula (1), α = θ − π/4, where θ is the angle between A's driving direction and the positive direction of the X-axis. Second, we obtain the transformed coordinates (x'_b, y'_b) of B by formula (2) and determine A's directional interface for B according to Table 4.
B. NAMED DATA TRANSMISSION PROCESS
In the following paragraphs, we introduce the details of the transmission process from the two aspects of interest forwarding and named data retrieving.
1) INTEREST FORWARDING
Figure 14 shows the interest processing flow at intermediate nodes. First of all, an intermediate node needs to check its forwarding qualification according to DDIM. If the D_INTs field of the interest is set, the node extracts the motion state of the node at the last hop and then determines whether it is within the specified interface by the method introduced in subsection IV-A. Therefore, only an intermediate node that belongs to the specified interface can serve as a relay for rebroadcasting the interest. Afterwards, the intermediate node continues to look up its PIT by name. If there is a matching entry, the current node has received the interest before, so it does not need to forward the interest again. In summary, only an intermediate node that is within the specified directional interface and has not received the same copy before can act as a relay for the current interest; a minimal sketch of the qualification check is given below.
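To make the qualification check concrete, the following Python sketch (our illustration, not the authors' code) determines which directional interface of the last-hop sender a receiving vehicle falls into, using the O'X'Y' transformation described above. Formula (2) and Table 4 are not reproduced in the text, so the axis-rotation step and the quadrant-to-interface mapping (including the back and right interface labels) are assumptions consistent with the surrounding description.

```python
import math

# Assumed quadrant-to-interface mapping (stand-in for Table 4):
# quadrant 1 -> front, 2 -> left, 3 -> back, 4 -> right.
QUADRANT_TO_INTERFACE = {1: "F-INF", 2: "L-INF", 3: "B-INF", 4: "R-INF"}

def ddim_interface(pos_a, vel_a, pos_b):
    """Determine which directional interface of sender A the receiver B
    falls into, using the O'X'Y' coordinate system centered on A."""
    xa, ya = pos_a
    xb, yb = pos_b
    ua, wa = vel_a                       # X- and Y-components of A's velocity
    theta = math.atan2(wa, ua)           # angle of A's driving direction in OXY
    alpha = theta - math.pi / 4          # formula (1): rotation from OXY to O'X'Y'
    dx, dy = xb - xa, yb - ya
    # Standard axis rotation by alpha (our reconstruction of formula (2)).
    x_p = dx * math.cos(alpha) + dy * math.sin(alpha)
    y_p = -dx * math.sin(alpha) + dy * math.cos(alpha)
    if x_p >= 0:
        quadrant = 1 if y_p >= 0 else 4
    else:
        quadrant = 2 if y_p >= 0 else 3
    return QUADRANT_TO_INTERFACE[quadrant]

# B is directly ahead of A, which is driving due east: B lies in A's front interface.
print(ddim_interface(pos_a=(0.0, 0.0), vel_a=(10.0, 0.0), pos_b=(50.0, 0.0)))
```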
Second, the relay needs to select the directional interface for interest forwarding by looking up its FIB. Similar to the consumer, the relay selects the optimal entry according to the LPM principle and timestamp. On basis of it, the relay updates the interest by resetting the D_INTs and PR_List fields. It is worth noting that the length of matching prefix indicates the relevance between the interest and the FIB entry. If the matching length of selected entry is not long enough, it is unreliable for the relay to direct the interest by the specified directional interfaces. To guarantee the coverage of interest dissemination, we require that a FIB entry can be selected as a reference only when the length of its matching prefix is longer than half the length of interest name. Therefore, if the relay can find the FIB entry satisfying above conditions, it is allowed to forward the interest to the specified interfaces. Otherwise, it rebroadcasts the interest in all directions. Finally, every relay also needs to decide when to forward the interest. Aiming at the drawbacks of delay-based distributed broadcast, we put forward the idea of accelerating the interest dissemination by using the preferred relay (PR) of directional interface. On basis of DDIM, the vehicle is not only able to select the interfaces but also set the corresponding PRs before broadcasting interest. If the vehicle has successfully received named data from a neighbor before, it can set the neighbor as the PR of corresponding interface in FIB entry. Compared with other candidates, the PR is a verified relay which has provided high transmission performance. Thus, it should be given the highest priority of forwarding the current interest. According to the above analysis, we design an improved delay-based distributed broadcast method in which the historical transmission records in FIB are fully used to reduce the delay at every hop. Our method will be elaborated in subsection IV-C. After the multi-hop broadcast, the interest finally arrives at the provider. The Nonce field is first read by the provider to determine whether it has received the interest before. If it has gotten and responded to the same request, this copy will be dropped. Otherwise, it will continue to find the matching data from CS, generate a named data packet, and send it to the incoming interface of interest. 2) NAMED DATA RETRIEVING The retrieval of named data still leverages multi-hop broadcast. Once receiving a named data packet, the intermediate node needs to consider three problems. (1) Whether it is qualified to forward this named data packet? (2) Which directional interface does it select? (3) When does it forward the named data packet? As shown in Figure 15, the intermediate node not only calculates whether it is within the specified interface but also looks up whether there is a matching entry in PIT. If both conditions are satisfied, it has the qualification to forward current named data packet. Then, the intermediate node sets the incoming interfaces of interest as the outgoing interfaces of named data packet according to the D_INTs of matching PIT entry. Finally, it forwards the named data by our improved delay-based distributed broadcast method. Although the above process is similar to interest forwarding, there are still two differences. First, the intermediate node selects the interfaces by looking up PIT instead of FIB. 
Second, since the incoming interfaces of the interest are recorded in the PIT entry without the relays at the last hop, the PR_List field cannot be set in the named data packet. Instead, the IS_FWD field is added to the PIT entry to indicate whether the local node has actually forwarded the corresponding interest. According to the distributed broadcast method, the relay that has forwarded the interest is more eager to obtain the named data than the other intermediate nodes, which have merely created a PIT entry after receiving the interest. Therefore, the relay that actually rebroadcast the interest should have the highest priority for forwarding the matching data. In our improved delay-based distributed broadcast method, if an intermediate node finds that the IS_FWD flag of the matching PIT entry has been set, it forwards the named data immediately. Otherwise, it waits for a while and then continues to make the forwarding decision.
C. DELAY-BASED DISTRIBUTED BROADCASTING
In COMPASS, both the interest and the named data are forwarded according to our improved delay-based distributed broadcast algorithm. Compared with previous work, our algorithm not only follows the strategy of 'listen before talk' (LBT) to reduce redundant copies but also adopts 0-delay forwarding at the high-priority nodes to accelerate the transmission process. In our broadcast algorithm, a function T is designed to calculate the delay t at the intermediate nodes before forwarding an interest or named data packet. As mentioned above, those intermediate nodes are divided into two categories according to their priorities. Since the high-priority nodes forward immediately after receiving a qualified packet, their value of t is set to 0. The low-priority nodes need to compete for the forwarding opportunity by setting different delays. Therefore, the value of t ranges from 0 to the maximum delay t_max. Following the design idea of [19] and [24], we choose the Short Interframe Space (SIFS) of the MAC-layer protocol as the basic unit of delay measurement and set t_max to 64 times the SIFS. In function T, t is determined by two factors, d and Δt. Here d is the distance from the local node to the relay at the last hop, and Δt indicates the duration for which the value of d stays within the 1-hop communication range R. As d and Δt increase, the value of t should decrease gradually, and its decline rate should also slow down. In summary, the function T should satisfy the following four properties. To meet the above properties, we design a 3-ary piecewise function T(h, d, Δt) for the delay calculation of interest and named data at intermediate nodes. In formula (4), as shown at the bottom of this page, h is a boolean variable. If the node has been selected as the PR of the directional interface according to the field of the interest, or has forwarded the interest matching the current named data, h is set to true to indicate a high-priority node. In these two cases, the value of t is equal to 0. For the nodes with low priority, T is defined as a rounded exponential function of d and Δt only. In practice, d can be calculated on the basis of the vehicles' coordinates, while Δt can be calculated by formula (5), as shown at the bottom of this page. In (5), x and y represent the distances along the X and Y axes, while V_x and V_y indicate the speed differences along the X-axis and Y-axis directions. The ⌊·⌋ in (4) represents the floor function; a minimal sketch of this delay calculation is given below.
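Formulas (4) and (5) are only referenced ("shown at the bottom of this page") and are not reproduced in the text, so the following Python sketch is our reconstruction from the surrounding description rather than the authors' exact definition: the delay combines the two exponential terms 32·e^(−d/m) and 32·e^(−Δt/n) with a floor and the 64-SIFS cap, and Δt is estimated as the predicted time the last-hop relay remains within range R under constant relative velocity. The SIFS value, and the way m and n are passed in as pre-computed tuning parameters, are also assumptions.

```python
import math

SIFS = 16e-6      # seconds; basic delay unit (actual value depends on the MAC, assumed here)
T_MAX_UNITS = 64  # t_max = 64 * SIFS

def link_duration(dx, dy, vx, vy, comm_range):
    """Estimate how long the last-hop relay stays within range R, assuming
    constant relative velocity (our stand-in for formula (5))."""
    v2 = vx * vx + vy * vy
    if v2 == 0:
        return float("inf")
    # Solve |(dx + vx*t, dy + vy*t)| = R for the largest non-negative t.
    b = dx * vx + dy * vy
    c = dx * dx + dy * dy - comm_range * comm_range
    disc = b * b - v2 * c
    return max((-b + math.sqrt(disc)) / v2, 0.0) if disc >= 0 else 0.0

def forwarding_delay(high_priority, d, dt, m, n):
    """Delay before rebroadcasting (our reconstruction of formula (4)):
    0 for high-priority nodes, otherwise a rounded exponential of d and dt,
    capped at 64 SIFS."""
    if high_priority:
        return 0.0
    k = math.floor(32 * math.exp(-d / m) + 32 * math.exp(-dt / n))
    k = min(max(k, 1), T_MAX_UNITS)
    return k * SIFS
```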
Thus, when h is false, the value of t is equal to k × SIFS, where k is an integer between 1 and 64. Both m and n are parameters used to adjust the numerical distribution of t. According to our algorithm, if the local vehicle does not obtain the highest priority, the value of t should be at least 1 SIFS. Considering that the effects of d and Δt on t are equivalent, we set the lower bounds of 32·e^(−d/m) and 32·e^(−Δt/n) to 0.5 in our simulations. Since delayed broadcasting only makes sense while d is within the 1-hop communication range R, m is calculated to ensure that the value of 32·e^(−d/m) is not less than 0.5 when d is equal to R. Similarly, Δt should not exceed the cruise duration of any vehicle. Thus, n is calculated to ensure that the value of 32·e^(−Δt/n) is not less than 0.5 when Δt reaches 1 minute, the average cruise duration of the three cities shown in Figure 8. As described in Algorithm 1, the whole process of our delay-based broadcast algorithm consists of three steps. First, the local vehicle checks whether it has the highest priority to forward the current packet. If the vehicle is the PR of the directional interface specified in the interest, or is labeled as the actual interest forwarder in the matching PIT entry, it sets h to true and broadcasts the packet immediately. Otherwise, it sets h to false and performs the second step of delay estimation according to formulas (4) and (5). Finally, the vehicle suspends the procedure according to the principle of LBT and does not broadcast the packet until t times out.
V. MOBILITY SUPPORT
Although the introduction of DDIM effectively reduces the redundancy in the whole network, COMPASS still faces many problems in high-mobility traffic scenarios. In addition to the common packet loss phenomenon, the failure of directional interfaces and the maintenance of the FIB and PIT are two important issues worth discussing.
A. INTERFACE REMAPPING
In COMPASS, the interest and named data packets are forwarded according to the FIB and PIT, respectively. Since both of them are built on the basis of DDIM, vehicles should adjust their interface divisions once their driving directions have changed. To maintain the effectiveness of the interfaces, we propose an interface remapping method for moving vehicles. Our remapping method aims to help the vehicle rebuild the relationship between the neighboring nodes and the directional interfaces after the driving direction has changed dramatically. As shown in Figure 16, the steering angle θ is defined as the clockwise change in driving direction. Based on the value of θ, the relationship between the directional interfaces before and after the driving direction change is given in Table 5. All the cases can be grouped into one-to-one and one-to-two mappings. First, if a vehicle makes a perfect right turn, U-turn, or left turn, the value of θ is exactly equal to π/2, π, or 3π/2, respectively. In this case, the relationship between the interfaces is a standard one-to-one mapping. Second, if θ falls into the other intervals, a node that belonged to one interface before the turn may appear in either of two adjacent interfaces. For example, in Figure 16, both m and n are initially within the F-INF of O. After O turns by θ to the right, node n remains within the F-INF, but m now belongs to the L-INF. Thus, there is a one-to-two mapping between the interfaces before and after the direction change; a minimal sketch of this remapping is given below. To achieve efficient forwarding in highly dynamic scenarios, all the FIB and PIT entries should be updated after each remapping.
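Table 5 is not reproduced in the text, so the following Python sketch is our reading of the remapping rule rather than the authors' exact table: the 4-bit D_INTs mask is rotated by the number of whole 90-degree sectors in the clockwise steering angle θ, and when θ is not an exact multiple of π/2 the adjacent interface is marked as well (the one-to-two case). The clockwise interface ordering is also an assumption.

```python
import math

# Interface order assumed as consecutive 90-degree sectors in clockwise order
# starting from the front interface: [F-INF, R-INF, B-INF, L-INF].
# D_INTs is a 4-bit mask, bit i corresponding to interface i in this order.

def remap_d_ints(d_ints, theta):
    """Remap a 4-bit D_INTs mask after the vehicle has turned clockwise by
    theta radians (one-to-one for multiples of pi/2, one-to-two otherwise)."""
    sector = theta / (math.pi / 2)
    base = int(sector) % 4          # whole-sector part of the turn
    spill = sector != int(sector)   # partial sector -> neighbors may straddle two interfaces
    remapped = 0
    for i in range(4):
        if d_ints & (1 << i):
            # A neighbor that was in interface i is now (mostly) in interface i - base ...
            remapped |= 1 << ((i - base) % 4)
            if spill:
                # ... and possibly in the adjacent interface as well (one-to-two mapping).
                remapped |= 1 << ((i - base - 1) % 4)
    return remapped

# A FIB entry marked only on the front interface, after a 60-degree right turn,
# now spans the front and left interfaces.
print(bin(remap_d_ints(0b0001, math.radians(60))))
```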
Based on the value of θ, all the D_INTs fields of the FIB and PIT entries need to be reset according to Table 5. It is worth noting that a remapping operation usually introduces a new interface due to the one-to-two mapping. Therefore, each remapping operation is very likely to expand the forwarding area of a packet. For a FIB or PIT entry, if it has experienced more than one remapping, all four interfaces may end up labeled in the D_INTs field. In this case, forwarding by directional interface degenerates into normal broadcasting. To avoid this problem, every entry in the FIB and PIT is allowed to be remapped only once. In COMPASS, the IS_RMP field is added to mark the remapping operation. In FIB management, if IS_RMP has already been set, the entire entry is removed at the next remapping operation, because it no longer serves as a valuable reference for interest forwarding. In PIT management, by contrast, an entry with IS_RMP set is required to set all the bits of D_INTs, so that the named data will be returned in all possible directions. In addition to how to perform remapping, when to trigger remapping is another important issue. In an urban traffic environment, the normal operations of vehicles in the motion state can be divided into two categories. One kind makes a small change in the driving direction, such as changing lanes and overtaking. The other kind causes a large change which breaks the mapping between directional interfaces and neighboring vehicles, such as steering and turning around. Thus, a vehicle should accurately identify its operations before triggering interface remapping. By leveraging a large number of vehicle sensors, the on-board computer can periodically acquire a variety of real-time data, such as the state of the steering wheel and the four tires. Based on these sensing data, machine learning techniques can be used to identify the operations which change the driving direction dramatically. Since how to build such a machine learning model is not the focus of this paper, we only provide this idea for future studies. In order to perform interface remapping in our simulation, we adopt a simplified method: since more than 95% of the turning angles are above 60 degrees in Figure 6, we set 60 degrees as the threshold for triggering the remapping operations at vehicles.
B. FIB AND PIT MANAGEMENT
Due to the high mobility of vehicles, the management of the FIB and PIT is critical to ensure the efficient transmission of named data. In addition to performing interface remapping after the driving direction changes, COMPASS requires all nodes to update their FIBs and PITs after receiving interest and named data packets. As mentioned in subsection IV-B1, a vehicle needs to adjust its PIT after receiving an interest. If no result is found by the principle of exact matching (EM), a new PIT entry is created, and its D_INTs field is set according to the incoming interface of the interest. Besides, the PEL field records the lifetime t_L of the current PIT entry. To ensure that the consumer can obtain the named data before it loses interest, the value of t_L is determined by three parts in formula (6): T is the interest duration maintained by the consumer, t_INT is the transmission time of the interest from the consumer to the current node, and t_DAT is the transmission time of the named data. Assuming that t_DAT is approximately equal to t_INT, the value of t_L can be estimated as T − 2·t_INT, as sketched below.
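Formula (6) itself is not reproduced in the text; the following short Python sketch (ours) simply restates the estimate t_L ≈ T − 2·t_INT, using the elapsed time since the interest was created as a stand-in for the one-way transmission time t_INT.

```python
def pit_entry_lifetime(interest_duration, interest_created_at, now):
    """Estimate t_L = T - 2 * t_INT for a newly created PIT entry, where
    t_INT is approximated by the interest's one-way travel time so far."""
    t_int = now - interest_created_at
    return max(interest_duration - 2.0 * t_int, 0.0)

# An interest valid for 4 s that took 0.3 s to reach this node keeps its
# PIT entry alive for roughly 3.4 s.
print(pit_entry_lifetime(4.0, 10.0, 10.3))
```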
In addition, if there is a matching PIT entry, the vehicle needs to check whether the incoming interface of current copy has been labeled in the D-INTs field. If this interface has not been included, the vehicle should reset the D-INTs field for the new interface. Otherwise, no modification is required. Finally, since the vehicle forwards the interest by referring to the matching FIB entry, the sequence number of reference FIB entry also needs to record in the FIB_E_ID field of PIT entry for subsequent updates. After receiving the named data, the vehicle should adjust its PIT and FIB in turn. First, it looks for the matching PIT entry by name. If there is no result found, it indicates that the interest has not been received, or the matching entry has been removed due to timeout failure. Accordingly, the vehicle only needs to drop the named data packet. If a matching PIT entry is found, the named data should be forwarded to the incoming interfaces of matching interest. Second, the vehicle should also adjust the FIB by leveraging the named data. In this step, it needs not only to create an entry for the new data but also update the matching entry that already exists in FIB. The vehicle looks up the FIB by the principle of EM. If it has not obtained the named data before, a new entry is added in the FIB. Otherwise, the vehicle continues to check whether the incoming interface has been included in the D-INTs field of matching FIB entry. If the data copy comes from a new interface, the vehicle marks the corresponding bit of D-INTs field for the interface and records the last-hop relay as its PR. In addition to creating or updating the matching FIB entry, the vehicle should also manage the reference FIB entry for the subsequent data transmission. By leveraging the FIB_E_ID field of PIT entry, the vehicle can easily find the reference FIB entry used for interest forwarding before. Similarly, the vehicle checks whether the incoming interface of named data has been included by the D-INTs field and whether the relay at last hop has been recorded as the PR. If both two conditions are satisfied, the reference FIB entry can be retained without any modification. If only the relay changes, the PR of corresponding interface should be replaced by the new relay. Otherwise, a new interface should be set in the D-INTs field with corresponding PR. In order to update FIB continuously, we rewrite the removing rules of PIT entry. Different from the original NDN, the intermediate node is required to retain every PIT entry even after receiving and forwarding the matching data. According to the FIB_E_ID field, it can quickly locate the reference FIB entry. Only after the lifetime t L times out, this PIT entry can be removed. At this time, if the local node has not received the matching data from the interest outgoing interfaces, it will remove those interfaces from the reference FIB entry. Different from PIT, there is not a lifetime field designed for the FIB entry. Thus, a FIB entry is removed only in two cases. First, if the FIB entry has already performed interface remapping, it should be directly removed when the next remapping is triggered. Second, if all the outgoing interfaces are removed due to the transfer failure, the FIB entry will be no longer valuable consequently. VI. 
SIMULATION
In this section, we first introduce the simulation settings and quality metrics, then verify that our DDIM is more suitable for urban traffic scenarios than the static directional interface model (SDIM) described in subsection III-B, and finally compare COMPASS with three state-of-the-art protocols: the original NDN [12], LAPEL [24], and Navigo [20].
A. SIMULATION SETUP
For realistic VNDN simulations, we adopt the network simulator ns-3 [37] with two critical modules, WAVE and ndnSIM [38]. WAVE is an overall system architecture for vehicular communications, which implements the physical and link layers according to IEEE 802.11p [1] and provides the interface for the upper layer in the protocol stack. ndnSIM is designed as a network-layer protocol model which not only supports all the functions of the original NDN protocol but also runs on top of any available MAC-layer protocol model. In our simulation, both modules are installed on all the nodes with default configuration parameters. Table 6 lists the main parameters of our simulation, which cover three aspects: the physical layer, the link layer, and the traffic scenario. To evaluate the performance of the protocols under different road planning, we generate two distinct traffic scenarios by leveraging MOVE [39], a visualization tool based on SUMO [40]. As shown in Figure 17, both scenarios follow the Manhattan mobility model over an area of 4 × 4 km², but their road plannings are quite different. The roads in Scenario 1 follow the north-south and east-west directions, while the roads in Scenario 2 run along the northeast-southwest and southeast-northwest directions. In both scenarios, every road is a 4-lane dual carriageway, and all the intersections are equipped with traffic signals. To capture individual differences, we define 6 vehicle types, including sedan, SUV, pickup, van, bus, and truck. Each type has its own physical parameters, such as size and acceleration, so vehicle mobility varies between types. In total, there are 400 vehicles with randomly assigned types and routes, making up 16 traffic flows in our simulation scenarios. It is worth noting that all the vehicles not only perform the basic operations of acceleration, deceleration, and turning, but also follow the traffic rules, such as stopping at red lights, moving at green, and keeping their speed within 60 km/h. Besides, 4 roadside units (RSUs) are deployed at the T-intersections in Figure 17. In each simulation, we randomly select a consumer from the vehicles and set it to publish interests at a fixed frequency. Providers with different contents are placed at the RSUs to satisfy the interests of passing vehicles. In order to evaluate the four VNDN transmission protocols comprehensively, we introduce five quality metrics: ISR, PS, TD, HPS, and IST. Their definitions are as follows.
• ISR is the average interest satisfaction ratio of all the consumers randomly selected in the simulation.
• PS is the average PIT size of all the vehicle nodes participating in the named data transmissions.
• TD is the average transmission delay of all the successful named data transmissions; the delay is calculated as the round-trip time of disseminating the interest and retrieving the named data.
• HPS is the average number of hops of all the interests arriving at the providers.
• IST is the average number of times an interest is sent, over all transmissions including both successful and failed cases.
B. DDIM vs. SDIM
To verify the advantages of DDIM, we compare the performance of two versions of COMPASS.
One uses our DDIM and the other adopts the SDIM mentioned in subsection III-B. Although the area around each node is divided into 4 interfaces in both models, the essential difference between them is that SDIM fixes the interfaces in the direction of north, east, south and west, while DDIM adjusts the division whenever vehicles' driving direction changes. In COMPASS, we not only redesign the data structures, forwarding strategies and broadcast algorithm, but also propose the interface remapping method and other update strategies to support high-mobility traffic scenarios. All of these constitute our COMPASS. Thus, the legend of 'DDIM' in Figures 18 and 19 indicates that all the nodes adopt the complete COMPASS protocol. To make an objective comparison, we replace DDIM by SDIM in the protocol stack of all nodes. Besides, the data structures and forwarding strategies are preserved, but the interface remapping is not adopted because SDIM does not allow interface adjustment at all. Thus, the legend of 'SDIM' means that all the nodes perform COMPASS with above settings. Given the impact of path planning on protocol performance, we implement the comparative experiments respectively in the two scenarios described in Figure 17. Figures 18 and 19 respectively show the metrics of DDIM and SDIM in two scenarios. Since the directional interfaces provided by those two models are similar in Scenario 1, their performance is almost identical. First of all, as the interest frequency at the consumers increases, the ISRs of DDIM and SDIM gradually decline. As shown in Figure 18a, the ISRs of two models are maintained above 85% when the frequency is within 10 packets per second (pps). Once the frequency reaches 30 pps, the values drop to about 55%. In VNDN, high interest frequency introduces a mass of packets which intensify the competition of wireless channel and increase the probability of packet collision. Therefore, the decline of ISRs in Figure 18a is inevitable. Due to the packet loss caused by channel competition, both the TDs and ISTs of two models increase with the increment of interest frequency. In Figure 18b, it is clear that the values of TDs rise significantly once the frequency becomes higher than 15 pps. Second, as the interest frequency increases, all the nodes in the network receive more and more interests. Consequently, both DDIM and SDIM obtain a growth curve of PIT size in Figure 18c. Finally, since the network topology is fixed, the route of interest from the consumer to the data provider does not change as the frequency increases. This is confirmed by the results of HPS showed in Figure 18d. In Scenario 2, the fixed interfaces provided by SDIM cannot properly match the road directions, while DDIM is able to adjust its interface division for named data transmission. As a result, DDIM is superior to SDIM in all metrics. First, although the ISRs decline with the increment of interest frequency, DDIM obtains higher values than SDIM in all cases. In Figure 19a, the curve of SDIM drops rapidly, but DDIM obtains nearly the same performance as it performs in Scenario 1. Similarly, all the TDs of DDIM are less than those of SDIM in Figure 19b, and the ISTs of DDIM are also larger than those of SDIM in all cases. Second, due to the poor division of SDIM, a large number of available interests are determined to be invalid and discarded at the intermediate nodes. Therefore, all the PSs of SDIM are less than those of DDIM in Figure 19c. 
Finally, although the consumers can retrieve named data by SDIM, part of interests have to make a detour due to the failed mappings. Therefore, SDIM introduces extra hops throughout the whole transmission progress. The results in Figure 19d also confirm this. Additionally, stability is also an important indicator to evaluate our models. In the two groups of comparative experiments, the consumers are randomly selected from 400 vehicle nodes to repeat the simulation 20 times. In consideration of the differences in the transmission performance between nodes, the numerical distribution of every metric is described in Figure 18 and 19. In Scenario 1, DDIM obtains nearly the same performance as SDIM, but it provides more stable named data transmission. In Scenario 2, DDIM is much better than SDIM in all the metrics as well as the stability. In conclusion, DDIM can provide strong support for named data transmission in vehicular environments. C. PERFORMANCE COMPARISON To verify the advantages of COMPASS, we compare it with the other three state-of-the-art protocols. First, the original NDN is set as the baseline scheme and labeled as O-NDN. In our experiments, all the details of O-NDN are implemented as described in [12], such as data structure and forwarding strategy. Second, two state-of-the-art VNDN protocols are selected for comparison. In Navigo [20], the consumer can locate the area where the data provider may appear potentially by leveraging geo-interfaces and plan the shortest path before sending its interests. In our experiments, it is assumed that the provider can be located successfully every time by Navigo, but it is impossible to achieve in practice. As introduced in section II, LAPEL [24] is an upgrade of CODIE [23], which not only limits the number of hops in the data retrieving process but also sets the lifetime of PIT entries dynamically. Since the four protocols are not affected by the road directions, the simulation with randomly selected consumers runs 20 times only in Scenario 1, and their results of ISR, TD, PS, HPS and IST are plotted in Figure 20. Figure 20a describes the ISRs of four protocols with the increment of interest frequency. First, except for 1 pps and 5 pps, COMPASS gets the highest values among all the protocols. For interest propagation, COMPASS limits the directions by leveraging DDIM, while O-NDN and LAPEL allow interest flooding in all directions. Therefore, COM-PASS cannot obtain as many opportunities as those two protocols to satisfy the interests, which fully explains its lower ISRs in the low frequencies. As the interest frequency increases, the number of packets throughout the network soars. As a result, the fierce competition of wireless channel leads to more and more serious packet loss. At this time, the unrestricted interest propagation becomes the Achilles heel of O-NDN and LAPEL, and their ISRs fall rapidly. Benefitting from DDIM, COMPASS can effectively alleviate packet loss and get the best results at the high frequencies. Besides, due to the limited number of hops for named data retrieving process, the advantage of multipath transmission is weakened by LAPEL, especially in the highly competitive wireless channel. Once the transmission along a single path fails, the consumer may no longer be able to obtain the matching data. Therefore, LAPEL's ISR is even lower than O-NDN in Figure 20a. Navigo follows the multi-hop broadcasting method of O-NDN, but both the interest and named data must be transmitted along a planned path. 
As the packets generated by Navigo are far fewer than those generated by O-NDN and LAPEL, it achieves higher ISRs than these two protocols at high frequencies. However, Navigo spreads the interest in a preset direction and does not have multiple directional interfaces to select from at each hop. Therefore, it underperforms COMPASS in Figure 20a. TD is another important indicator for performance evaluation. It is worth noting that in our simulations all the protocols adopt the same interest retransmission mechanism provided by ndnSIM. By default, the consumer is allowed to send the same interest 3 times. Whenever an interest times out, the consumer resends it with a larger retransmission timeout (RTO) until the interest has been sent enough times. It is obvious that TD is positively related to IST. Figures 20b and 20c show the TDs and ISTs of the four protocols with respect to the interest frequency, respectively. Since COMPASS does not require as many transmissions as O-NDN and LAPEL, its TDs are lower than those of these two protocols. Affected by serious packet loss, the IST of O-NDN rises sharply as the interest frequency increases, and its TD also quickly becomes intolerable. Similar to O-NDN, LAPEL does not alleviate the packet loss problem very well. Because of the limitation on transmission hops, it indeed incurs fewer ISTs and lower TDs than O-NDN when the interest frequency is below 15 pps. However, the defect of a single data retrieval path makes its IST larger than that of O-NDN once the interest frequency reaches 20 pps. Correspondingly, it also has the highest TDs in the cases of 20 pps, 25 pps, and 30 pps. Benefiting from the path planning for both interest and named data, Navigo effectively reduces the probability of packet collision. As a result, it performs better than O-NDN and LAPEL on IST and TD. Compared with COMPASS, Navigo obtains lower TDs at low frequencies. However, flooding along a single path makes it more likely to lose packets than COMPASS once the interest frequency increases. In Figure 20c, the ISTs of Navigo are larger than those of COMPASS, and the gap increases with the frequency. As a result, the additional interest retransmissions make the TD of Navigo higher than that of COMPASS from 15 pps onward. Figure 20d describes the PSs of the four protocols as the interest frequency increases. Benefiting from the path planning for both interest and named data, Navigo limits the number of packets throughout the network and has the smallest PS. Although quite different from Navigo, COMPASS is also able to limit interest flooding and reduce the number of redundant packets. In Figure 20d, its results are larger only than those of Navigo. Conversely, due to its unrestricted broadcasting method, O-NDN gets the worst results. As shown in Figure 20d, its PS increases significantly with the increment of the interest frequency. In LAPEL, a dynamic algorithm is designed for adjusting the PIT lifetime so as to remove the expired entries in time. Therefore, the PS of LAPEL is much smaller than that of O-NDN in all cases. The HPSs of the four protocols are described in Figure 20e. Since path planning brings great benefits to Navigo, it obtains the lowest HPS at all frequencies. In our COMPASS, the delay-based distributed broadcast algorithm tries to select stable nodes which are far away from the local node. Therefore, its HPS stays within 10, second only to Navigo.
It should be noted that Figure 20e shows the hops of interest received by the providers rather than the hops of named data in the retrieving process. LAPEL indeed limits the hops of data transmission, but it makes the same interest flooding as O-NDN. Therefore, LAPEL does not perform much better than O-NDN on HPS in Figure 20e. VII. CONCLUSION In this paper, we present that the lack of valid underlying interface is a key reason resulting in inefficient named data transmission of VANETs. To deal with this problem, we propose a novel protocol and name it as COMPASS. Compared with previous work, the contribution of COMPASS can be summarized in three aspects. First, the vehicles' movement patterns in urban traffic scenarios are explored by analyzing three taxi GPS trajectory datasets, and a dynamic directional interface model (DDIM) is built in NDN layer that associates the virtual named data space with the actual network space. Second, the data structures and forwarding strategies are redesigned on basis of DDIM to specify the broadcasting area of interest and named data at each hop during transmission progress. Besides, an improved delay-based distributed broadcast algorithm is proposed to speed up the whole transmission by allowing high-priority relays to make 0-delay forwarding. Finally, to enhance the robustness of COMPASS in high-mobility environment, an interface remapping method is designed for updating the FIB and PIT entries after driving direction has changed, and the update strategies of FIB and PIT after receiving an interest or named data packet are also improved. To verify the advantages of COMPASS, we make comparison experiments on ndnSIM. The results show that (1) the DDIM adopted by COMPASS is more suitable than SDIM for complex urban traffic environment. (2) COMPASS outperforms the other three state-of-the-art protocols since it achieves high performance with low overhead. Therefore, COMPASS is a feasible solution for efficient named data transmission in VANETs.
Fabrication of Superconducting Nanowires Using the Template Method The fabrication and characterization of superconducting nanowires fabricated by the anodic aluminium oxide (AAO) template technique has been reviewed. This templating method was applied to conventional metallic superconductors, as well as to several high-temperature superconductors (HTSc). For filling the templates with superconducting material, several different techniques have been applied in the literature, including electrodeposition, sol-gel techniques, sputtering, and melting. Here, we discuss the various superconducting materials employed and the results obtained. The arising problems in the fabrication process and the difficulties concerning the separation of the nanowires from the templates are pointed out in detail. Furthermore, we compare HTSc nanowires prepared by AAO templating and electrospinning with each other, and give an outlook to further research directions. Introduction Superconducting nanowires are interesting mesoscopic 1-dimensional (1D) objects for many reasons. Nanostructuring superconducting materials may show effects which are not known from the respective bulk materials: As examples, one may mention Pb films, where the transition temperature was found to show an oscillating behavior, depending on the thickness of the superconducting film [1] and Al nanowires, exhibiting size-dependent breakdowns of superconductivity [2]. Other superconducting materials, such as Ga, In, or Tl, may exhibit increased values of the superconducting transition temperature, T c , when being prepared as quasi 0-dimensional (0D) nanoparticles of ultra-fine nanogranular materials [3][4][5][6][7][8]. From a fundamental point of view, superconductivity is characterized by two critical lengths [9,10], the London penetration depth, λ L , and the coherence length, ξ, so one may prepare nanowires with at least one dimension below one of these characteristic lengths. A wire is classified as 1D if its diameter d is smaller than the superconducting coherence length ξ. As result, the superconducting properties may be different from what is obtained from bulk samples of the same composition, and quantum fluctuations [11,12] may dominate the superconducting properties [13,14]. From an application point of view, the interest in superconducting nanowires is pushed by the continuing drive for miniaturization in the electronics industry demanding the reduction of heat dissipation. This may require the use of superconducting interconnects between the semiconducting circuits [15]. Furthermore, the use of superconducting nanowires as functional elements in sensors, e.g., single photon detectors, is another big field of interest, especially in the field of quantum photonics [16][17][18][19]. And, finally, a new challenging and promising application for semiconducting or superconducting nanowires may be the hosting of qubits for quantum computing with improved stability [20]. In conventional superconductors, such as Al, Pb, Sn, Nb, NbN, MoGe, and others, in thin film or nanowire form, the coherence length is ξ ∼ 5-100 nm, which is typically 10-1000 times the Fermi wavelength. In such wires, the wave function of the Cooper pairs only depends on the position along the wire, while it is independent of the position within the wire cross section. A nanowire is classified as quasi-one-dimensional (quasi-1D) if the diameter is of the order of d < π √ 2ξ. 
This condition ensures that vortices, having a core diameter of 2ξ, are not energetically stable in the wire. This means that vortices cannot penetrate into the nanowire, and the nanowire remains in the Meissner state even in large magnetic fields. Therefore, the superconducting order parameter is approximately constant within the cross section of the nanowire. Since ξ diverges at the critical temperature, T c , it is not too complicated to fabricate a nanowire which is quasi-1D near T c . Thus, the fabrication and characterization of such nanowires, mostly obtained by lithographical techniques from thin films on substrates, were discussed extensively in the books by Bezryadin [13] and Altomare [14]. In the case of high-temperature superconductors (HTSc), the superconducting parameters λ L and ξ are fundamentally different, as ξ is in the lower nanometer range (typical dimensions are ξ ab (0) ∼1.3 nm) and λ L (0) is very large (∼130 nm), owing to the fact that the Ginzburg-Landau parameter κ GL is very large for the HTSc materials [21]. Thus, the fabrication of 1D-nanowires of HTSc materials is a large challenge, bringing the commonly used lithography techniques to their limits. The HTSc materials prepared in nanowire form comprise mainly YBa 2 Cu 3 O 7−δ (YBCO) and Bi 2 Sr 2 CaCu 2 O 8+δ (Bi-2212), both being the most studied cuprate HTSc in the literature. Furthermore, the influence of the chosen substrate on the superconducting properties of the resulting nanowires may play an important role, so substrate-free, freestanding nanowires are desired for many characterization measurements. For this reason, approaches of nanotechnology became very interesting to the community. Several types of metallic and oxidic materials were prepared in nanowire form using track-etched polymer membranes and the alumina template (anodic aluminium oxide (AAO)) approach. This research was reviewed already in References [22][23][24][25][26][27]. Starting from the year 1997, magnetic and superconducting materials were fabricated using the template approach [28][29][30][31]. Both classes of materials share the presence of critical lengths, such as the domain wall width and the superparamagnetic limit for magnetic materials, and λ L and ξ for superconducting materials. Due to the ongoing research on patterned media for magnetic storage, magnetic materials (metallic ones, as well as ceramic-based ones) were fabricated as nanowires within the templates but also as films on top of the templates to serve as simple and cheap means to fabricate patterned media. This provided the basis for research concerning superconducting nanowires. Here, it is important to mention that the templating approach is not limited to the aforementioned track-etched polymer membranes (soft templates) and the AAO templates (hard templates) but other types of templates, such as carbon nanotubes, block copolymers, biological nanostructures, and others, were also used in the literature to grow magnetic and superconducting nanowires. The fabrication processes of AAO templates and the self-organization of the pores were already extensively reviewed in the literature [32][33][34][35], and the fabrication of AAO-templated magnetic nanowires by electrodeposition was recently reviewed by Piraux [36].
Thus, in the present review, we focus on the fabrication processes of superconducting nanowires using mainly the AAO template approach, discuss the various problems appearing in the preparation and separation of the nanowires, and give an outlook on further research. Basic Ideas The basic idea of the fabrication of superconducting nanowires via the AAO template approach is to fill the template with superconducting material. Figure 1 presents SEM images of empty AAO templates. Figure 1a (top view) and Figure 1b (cross section) stem from a self-prepared template with the Al-layer at the bottom, and Figure 1c (top view) and Figure 1d (cross section) belong to a commercial AAO template (Whatman anodisc™ [37]). The pores may then be filled using various techniques; electrodeposition, sol-gel processes, sputtering, and melting have all been reported in the literature [38][39][40][41]. With suitable experimental arrangements, classic deposition techniques, such as evaporation or sputtering, may also be employed to fill the templates [42]. This is illustrated schematically in Figure 2a-d. As electrodeposition [43] (a) is a quite flexible technique, the fabrication of multi-layered systems, such as Nb/Cu nanowires, is also possible. The combination of electrodeposition and AAO templates is, thus, a very useful one, but is mainly applied to conventional superconductors, even though YBCO can also be fabricated by electrodeposition [44,45]. The filling of the AAO pores using the sol-gel approach (b) is very interesting, especially for ceramic materials such as the HTSc. Figure 2c shows the arrangement for sputtering as an example. In this situation, the pores may not be completely filled with material, and there will be in all cases a remaining top layer of the sputtered material, which must be properly removed before preparing any electric contacts. Figure 2d illustrates the melting approach. Here, the material to be filled into the templates is put on top, and the temperature is increased above the melting point. The molten material may then enter the pores of the template. This approach has been used in the literature to fabricate YBCO nanowires. Besides the filling of the nanopores with a superconducting material, the hexagonal lattice of the pores in the AAO templates is very similar to the hexagonally ordered vortex lattice formed in type-II superconductors in high magnetic fields. Thus, there is a possibility to study matching effects between the vortex lattice in a superconducting film evaporated on top of the AAO template and the lattice of the AAO pores. In this case, there is no need to remove anything from the templates, similar to the patterned media in magnetism. These matching effects can even be enhanced when filling the AAO templates with a magnetic material, such as Ni, Fe, or Co, and then covering this template with a superconducting film. Therefore, this second approach has also attracted researchers to these effects. Of course, such experiments were carried out using conventional metallic type-II superconductors, such as Nb, NbN, or MoGe [46][47][48][49][50][51], as AAO is not a suitable substrate for HTSc thin films. These experiments will be discussed in Section 3.4 below. Conventional Superconductors In this section, we will have a look at the various conventional superconductors prepared in nanowire form using AAO templates. Dubois et al.
[29] and Yi and Schwarzacher [31] were the first to report on superconducting Pb nanowires prepared with the templating technique; both groups used track-etched polycarbonate membranes, the electrodeposition technique, and Pb as the superconducting material. Figure 3 presents SEM images of superconducting nanowires of metallic type, such as Pb (a), Sn (b), Zn (c), Pb/Cu (d), and Ga (e) [8,38,[52][53][54]. For all these elemental superconductors, the wire diameter is clearly below the critical dimensions; thus, electric transport measurements may reveal the 1D character. Note that most of these elemental superconductors are also type-I superconducting materials. Consequently, already the first experiment on superconducting nanowires [29] showed a large enhancement of the critical field of the Pb nanowire arrays fabricated. Further work then focused on effects of the 1D nature of the nanowires, manifested by a non-zero resistance, which represents different stages within the superconducting state. A possible explanation of this behavior is the formation of phase-slip centers when the current or a magnetic field destroys superconductivity. Thus, electric transport measurements (magneto-resistance, I/V-characteristics) are very important to analyze these properties of the nanowire arrays or of extracted, individual nanowires [52,55,56]. Therefore, in Ref. [57], the group from Louvain described a method based on nanolithography techniques to prepare proper electric contacts to the nanowire arrays, which is useful for all electric transport measurements on the nanowire arrays. Zhang and Dai [58] fabricated 45 nm-diameter Pb nanowires by electrodeposition in AAO templates. They found the Pb nanowires to exhibit fcc structure and a structurally uniform behavior. The anisotropic magnetic properties (magnetic fields applied parallel and perpendicular to the AAO template) were measured by SQUID magnetometry, revealing the behavior of Pb as a type-II superconductor with a T c just below 7 K. Furthermore, they found flux entry into and exit from the sample to be inhibited above the lower critical field, H c1 , when increasing/decreasing the applied magnetic field. Multilayered superconductors (mainly Nb/Cu thin films) were intensively investigated in the literature [59,60], so it was very straightforward to prepare electrodeposited, multilayered nanowires using the AAO template approach. de Menten de Horne [61] applied a single bath technique to fabricate Pb/Cu multilayered nanowires and achieved a relatively good control of the geometrical parameters with Cu layers as thick as 10 nm. A very specific pattern of the magnetoresistance was obtained in low magnetic fields, which is likely to be caused by proximity effects, demonstrating the interesting physics behind such multilayered nanowires. Li et al. [53] applied DC magnetron sputtering to prepare Sn nanowire arrays in AAO templates and measured magnetic moments as a function of temperature and field at temperatures down to 2 K. Furthermore, electron microscopy was employed to study the resulting microstructures in detail. Two different types of morphologies were obtained by controlling the substrate temperature during the sputtering process. The Sn film deposited onto the AAO template at room temperature was found to exhibit wetting behavior and produced cross-linked Sn nanotube arrays. In contrast to this behavior, isolated Sn nanotube arrays were obtained with an increased substrate temperature.
This finding demonstrates that sputtering can also be employed to fabricate superconducting nanowire arrays, and eventually also multilayered systems, such as Sn-Pt. Even Ru nanowires were fabricated by Wang et al. [62] using the template approach, employing commercial, track-etched polycarbonate membranes with 30 and 50 nm pores. The Ru nanowires were found to be polycrystalline, consisting of ultrasmall grains with only 2 nm diameter. The electric transport measurements performed showed that the nanowires were metallic, but no superconductivity was found at temperatures down to 0.3 K (the expected T c value of Ru bulk material is 0.51 K), which may be due to the very small grain size. Samples of nanostructured β-Ga wires were successfully prepared by a novel method of metallic-flux nanonucleation in AAO templates by Moura et al. [8], allowing the determination of several superconducting parameters via magnetic measurements. The authors could well describe the Ga nanowires as a weak-coupling type-II-like superconductor with a Ginzburg-Landau parameter κ GL = 1.18, favored by the nanoscopic scale of the Ga nanowires. This result, including the measured relatively high T c of 6.2 K, is in stark contrast to pure bulk Ga, which is a type-I superconductor with a T c of ∼1.08 K [63,64]. Thus, it is worthwhile to investigate the T c increase as a function of nanowire diameter in future works. Another interesting experiment concerning superconductivity and AAO templates was carried out by Haruyama et al. [65], who synthesized multi-walled carbon nanotubes (MWNTs) in AAO templates using chemical vapor deposition (CVD). The rigid AAO template served as a holder to cut off the ends of the MWNTs by ultrasound, enabling gold/Nb electrodes to be evaporated onto the open ends of the MWNTs. The appearance of proximity-induced superconductivity and supercurrents in the MWNTs evidenced the high quality of the contacts prepared. To summarize this section, it is obvious that nanowires of elemental superconductors are very interesting objects providing new physics of the superconducting state, so the research in this direction is ongoing and will include even more materials, such as In or Sn prepared using the metallic-flux nanonucleation technique, or some types of metallic alloys. High-Temperature Superconductors (HTSc) To prepare nanowires of HTSc materials, the AAO template approach was also used several times in the literature. Two approaches were employed, the sol-gel technique and the melting one. The sol-gel route is very useful to prepare ceramic materials in a controlled manner, and many experiments are described in the literature to grow HTSc superconducting materials in this way [39,[66][67][68][69][70]. The melting approach uses pre-prepared HTSc powder on top of the template, which is then heat-treated above the melting temperature. This approach offers the possibility to reduce the melting temperature by using superconducting nanopowders. However, there are two main problems arising: (i) the necessary etching away of the AAO template material after the fabrication process is difficult without affecting the HTSc material, and (ii) the Al 2 O 3 material itself may have an effect on the resulting superconducting properties of the nanowires due to diffusion of Al into the HTSc unit cell. The latter point was found already during the first preparation of YBCO single crystals using Al 2 O 3 crucibles [71], when the first large YBCO crystals obtained showed only a T c of ∼65 K.
A consequence of this is that the contact time of a molten HTSc with the AAO template should be as short as possible, which excludes the application of a temperature program for the growth of single crystalline material. Furthermore, the AAO templates are hardly stable in such a process. Therefore, single crystal-type nanowires cannot be fabricated using the melting approach, and the resulting nanowires are polycrystalline with many small HTSc grains. The former point is even more problematic: It is practically impossible to find an etching solution which does not affect the HTSc material itself. All the etchants employed to extract the superconducting metallic nanowires or the magnetic nanowires described in the literature will also attack the HTSc material. So, the best solution for investigation of the superconducting properties is to study the nanowire arrays in the entire filled template without attempting to extract the nanowires. A well-suited method to remove and cut superconducting nanowires is the focused ion-beam milling (FIB) technique [72]. However, in the case of the templated superconductors, it is only possible to cut some sections; the technique is unsuitable for removing the nanowires from the templates. In the case of HTSc, the alumina is also not a well-suited substrate material as Al may substitute for Cu in the Cu-O-planes, as already mentioned before. Xu et al. [39] applied the sol-gel route to fill the AAO pores. Their main achievement is the finding that, at certain temperatures of the gels, single-crystalline YBCO nanowires could be obtained, which is very interesting for electric and magnetic measurements. The sol with dispersed blue colloidal particles was kept at a temperature of 70 °C, and the AAO template was dipped into the hot sol. As a consequence, a well-crystallized YBCO phase was obtained at about 700 °C, which is the lowest temperature reported in the literature compared to the temperatures applied in the ceramic method [73] and other sol-gel processes for YBCO [39,70,74]. Thus, this reduction of the fabrication temperature is a very important step in order to fabricate YBCO nanowires with the AAO templates. However, in their paper, these authors did not show any kind of measurement of the superconducting properties of the nanowires. Dadras and Aawani [70] also dipped AAO templates into a YBCO sol, as well as applied the melting approach with pre-prepared YBCO powder. For both types of samples, they could successfully remove the nanowires from the templates by NaOH etching. Zhang et al. [74] employed the sol-gel route and discussed the growth process of YBCO within the AAO templates in detail. In their optimized sol-gel process based on the Pechini method, molecular-level mixing was carried out in the form of a Y-Ba-Cu-EDTA (ethylenediamine tetraacetic acid) complex, and the network was subsequently formed by esterification of ethylene glycol and the metal-EDTA complex. Lai et al. [75] successfully used the sol-gel technique and AAO templates to prepare Bi-2212 nanowires (Figure 4). The magnetically determined (zero-field cooling (ZFC), 10 Oe) transition temperature was 84 K, which is close to the bulk value. Furthermore, the authors presented magnetization measurements at various temperatures (2, 5, 20, and 50 K) in the field range ±70 kOe, which show all the features of a polycrystalline material. The magnetic signals at more elevated temperatures were too small to be measured with their setup. Li et al.
[40] were the first to fill the AAO templates with molten material in order to produce YBCO nanowire arrays. These authors attempted to separate the YBCO nanowires from the templates but failed to do so. Thus, they could only characterize the superconducting properties of the entire nanowire arrays using an AC susceptibility technique and found a T c of ∼91 K, which directly corresponds to the bulk value. However, the superconducting transition is very broad and remains incomplete down to 80 K. The measured T c and the X-ray data confirmed that YBCO is properly formed in this melting approach. To summarize this section, we can state that it is well possible to fabricate YBCO and Bi-2212 nanowires (arrays) using the AAO template approach. All authors characterized the microstructure of their samples using electron microscopy (SEM, EDX, TEM) and X-ray diffraction (Figure 5), but more intensive analysis was not done due to the problems of extracting the HTSc nanowires from the templates. Thus, electric transport measurements or magnetization data were also hardly collected, and if so, then only for full nanowire arrays. Hence, the possible interesting physics of such HTSc nanowires has not yet been addressed properly in the literature. Filling Commercial AAO Templates with High-T c Superconducting Materials The use of commercially available AAO templates (Whatman anodisc™ [37]) for the fabrication of HTSc nanowires is quite attractive, offering a possible nanowire diameter as small as 20 nm, which would bring the nanowire diameter close to the critical dimensions, especially at elevated temperatures close to T c . Figure 6 gives a schematic view of the differences between the commercial AAO templates and the self-fabricated ones. As the anodisc templates are open on both sides according to their main use as nanofilters, there is no need to remove the Al/Al 2 O 3 layer at the bottom. Using the melting approach, the missing bottom layer is not a problem as the template can be placed on a suitable underlayer. The templates have a thickness of 50 µm and an overall diameter of 27 mm, and the nanopores run through the entire thickness of the template. Thus, we attempted to grow YBCO and NdBa 2 Cu 3 O y (NdBCO) nanowires using the anodisc templates, and the results were published in Refs. [41,76,77]. Pre-reacted YBCO and NdBCO powders were employed, ground to an average particle size of about 1 µm by ball-milling. The preparation of such powders was described by Hari Babu et al. [78]. An Al 2 O 3 plate was placed on top of the template/powder arrangement, and at the bottom, a plate of the "green phase" Y 2 BaCuO 5 (Y-211) was used to provide a reaction-free underlayer. The entire heat treatment took place in a standard laboratory box-type furnace. The heat treatment was chosen similar to the single crystal growth process [79], but with a longer holding time at the maximum temperature (1050 °C for YBCO, 1100 °C for NdBCO) in order to ensure the complete melting of the powder. For the following electric and magnetic measurements, an oxygenation step was applied (450 °C, 12 h, flowing O 2 ). The resulting HTSc-filled AAO templates were found to be very brittle and to break easily into several pieces. Thus, for the further handling steps, the pieces of the HTSc-filled templates were glued onto Macor™ plates with GE varnish, enabling even mechanical grinding and polishing of the HTSc-filled templates.
However, the microstructural analysis of the HTSc-filled templates showed that there are hardly any differences between experiments using the 20 nm or 100 nm templates. This was also seen by other researchers trying to prepare PbTiO 3 nanotubes with these AAO templates [80]. A typical result of our analysis is shown in Figure 7. The size of the pores in the commercial templates is always ∼100 nm (the pore diameters on the top surface tend to be somewhat smaller than at the bottom surface, and, in the center section, the pore size is much larger [76]), and only some constrictions in the center of the template define the smaller nominal diameter, as indicated by arrows. Thus, we have to conclude that the pore diameters measured were mostly larger than the nominal values, especially for the templates with a nominal pore diameter of 20 nm. As this is fully reasonable for the intended use of the AAO templates as nanofilters, we must conclude here that nanowires can only be fabricated with a diameter of ∼150 nm using the commercial templates. Figure 8 shows the microstructural analysis of YBCO-filled AAO templates (a,b) and the extracted nanowires (c). After several unsuccessful experiments, the YBCO nanowires could be separated from the AAO template by chemical etching with a 4 mol/l NaOH solution (holding time of at least 1 h). The diameter of the nanowires is not homogeneous [76], but the pieces of the nanowires obtained are free of cracks. The length of the nanowire pieces is typically about 2-10 µm. Finally, Figure 9 presents the electric transport measurements performed on such YBCO and NdBCO nanowire arrays [77]. The measured superconducting transition temperatures, T c , are 89 K (YBCO) and 94 K (NdBCO), respectively. Thus, both values correspond to the T c -values of the bulk counterparts, even though the transition widths are relatively broad. The inset in Figure 9 illustrates the electrical connections to the nanowire array, provided by two Au layers on top and bottom of the AAO template. Thus, the melting of HTSc is a feasible method to fill the AAO templates in a straightforward experiment. No effect of Al diffusion on the transition temperature of the resulting HTSc nanowires was observed, but the removal of the template material to extract the HTSc nanowires is a tedious process, so it is more useful to analyze the entire nanowire arrays in the electric and magnetic measurements. Templates to Introduce Defect Structures in Thin Films In this section, we discuss the joint work of the groups of Piraux and Moshchalkov. Here, the AAO templates were not intended to be filled with superconducting material but served as structured substrates to introduce defects in a superconducting film evaporated on top of the template. This work was initiated by a first study of Welp et al. [81] on Nb films on top of the AAO templates and is similar to previous work covering AAO templates with, e.g., permalloy films to achieve patterned media for magnetic storage. In the case of a superconducting film on top of the AAO membrane, there is a striking similarity of the hexagonal vortex lattice with the hexagonal lattice of the AAO pores. Thus, one can expect matching effects, as the distances in the vortex lattice are tunable by applying a magnetic field perpendicular to the template surface according to a 0 = √(2Φ 0 /(√3 B)), with Φ 0 denoting the magnetic flux quantum, B the external magnetic field, and a 0 the intervortex spacing [82]. In Refs.
[46][47][48][49][50][51], large-scale superconducting antidot arrays were grown from Si-supported anodized alumina substrates. In Figure 11a-f, the vortex patterns are shown, and, in Figure 11g,h, the corresponding magnetic data (normalized critical current versus magnetic field), taken from Ref. [49], are presented. The fields H 1/2 , H 1 , etc., are the matching fields seen in the magnetic data, and the images in Figure 11a-f show the corresponding vortex arrangements. The transport and magnetization measurements performed in these works [46][47][48][49][50][51] have established the existence of pronounced matching effects (peaks in the M-H-diagrams) in applied magnetic fields up to 700 mT at temperatures as low as 5.7 K. The critical current density of the films was shown to be increased by two orders of magnitude. A similar experiment was carried out by Ye et al. [83], who fabricated a Co-nanorod array by the AAO template approach and covered it with a Pb/Bi superconducting film. Hysteretic superconducting properties and an increased critical current density in the superconducting film were revealed in the magnetization measurements, and the domain structure of the Co-nanowire array was invoked to explain these effects. This work nicely demonstrates the importance of magnetic pinning centers in order to increase the flux pinning properties of superconducting materials. To conclude this section, we may state that the use of AAO templates (filled with magnetic material, or in pure form) to fabricate superconducting films on an anti-dot lattice brought up interesting experiments concerning the flux pinning properties of mostly conventional metallic superconductors. However, the fabrication of so-called hybrid magnetic/superconducting systems [84] would also be possible using HTSc films plus AAO templates filled with magnetic material to study the flux pinning enhancements. Discussion In general, the templating approach to prepare superconducting nanowires has produced several interesting results published in the literature, but mainly on the conventional metallic superconductors, where the AAO material can be removed from the nanowires straightforwardly by etching. In addition, for these materials, the parameters of the pores enable the production of true 1D nanowires, so the influences of thermally activated phase slips (TAPS) and quantum phase slips (QPS) could be nicely demonstrated. More recent experiments also addressed the effects of T c enhancement by nanostructuring, and novel filling approaches (metallic-flux nanonucleation [8]) were developed, so we may expect more interesting physics to be revealed in various other, nano-patterned superconducting materials in the near future. The other approach, producing superconducting films on top of the AAO membranes to obtain antidot lattices with increased flux pinning, also generated very nice results showing matching effects between the hexagonal pore lattice and the magnetic field-tunable flux-line lattice. For the HTSc materials, the difficulty of separating the nanowires from the templates must be pointed out as the main problem preventing the wider use of the AAO templates to produce HTSc nanowires. Thus, electrospinning [85][86][87] and solution-blow spinning [88,89] have completely overtaken the template approach, enabling the growth of longer and more homogeneous HTSc nanowire fabrics, from which individual pieces can easily be cut off using FIB [72,90].
Furthermore, the nanowire fabrics themselves have interesting electric and magnetic properties [91][92][93][94][95][96][97][98][99][100][101], which may lead to specific applications which are not possible with other types of HTSc materials. Figure 12a-c present a comparison of AAO-templated YBCO nanowires (a, taken from Ref. [77]), electrospun Bi-2212 nanowires (b, data from Saarbrücken) and solution blow-spun YBCO nanowires (c, Reference [99]). Note here the much longer length of the individual nanowires prepared by electrospinning (the resulting nanowires produced by solution blow-spinning are quasi-identical) as compared to the ones by AAO templating. Furthermore, the separation of individual nanowire pieces from the prepared nanowire fabric is simple using the FIB technique. The problem with the spinning approach remains, however, in the large nanowire diameter of 200-500 nm, which is far too large to obtain true 1D HTSc nanowires. Thus, future research is required to find templates suitable to produce HTSc nanowires in the 10 nm range. (Figure 12 caption: comparison of AAO-templated nanowires ((a), YBCO) [77] with such prepared by the electrospinning technique ((b), Bi-2212) as measured in Ref. [95] and solution blow-spun nanowires ((c), YBCO) [99]; note the obvious difference in the length scale.) Several methods were applied in the literature to analyze the microstructure of the template-prepared nanowires. Among these are SEM, EDX, (high-resolution) TEM, selected area electron diffraction (SAED), and XRD, but a detailed investigation concerning the grain boundary misorientation and a possible texture is missing in the literature. In the case of the conventional superconductors, there was not much work done in this direction, and, for HTSc nanowires, the problems arising due to the difficulties in removing the nanowires from the templates have prevented the application of an analysis technique such as electron backscatter diffraction (EBSD). In contrast, on electrospun nanowires, EBSD and its further development, transmission Kikuchi diffraction (TKD, sometimes called t-EBSD) [102], have revealed the existence of a fiber-like texture of the superconducting Bi-2212 and ferromagnetic (La,Sr)MnO 3 nanowires [103,104]. Another unsolved issue concerning the sol-gel-derived, templated HTSc nanowires is the analysis of any solvent possibly remaining within the template, as no thermogravimetric data are published in the literature. Furthermore, magneto-optic imaging (MO [105][106][107]) of flux penetration into the superconducting nanostructures was not attempted in any experiment, nor was magnetic force microscopy (MFM), as was done on magnetic nanowires [108]. In this direction, there is still plenty of work to be done in the future. For the analysis of the superconducting properties, electric transport and magnetic measurements on the nanowires are required, preferably on individual nanowires. In the case of HTSc materials, the fabrication of proper electric contacts to an individual nanowire is quite complicated due to the nanowire surfaces, which are inhomogeneous. So, even for the electrospun nanowires, which can be handled by FIB, the evaporation of Pt contacts required several attempts to produce reasonably good electric contacts with low resistance [90]. Another problem prevails with the magnetic data. The usually employed measurement systems (SQUID, VSM, or AC susceptibility) always require a substantial amount of superconducting material, which excludes the measurement of an individual nanowire.
Thus, the use of a specifically developed low-temperature cantilever magnetometer [109] would be highly desirable in the research on superconducting nanowires. In the review of Piraux on magnetic nanowires [36], it was demonstrated that the electrodeposition technique, together with the AAO templates, enables the fabrication of a variety of nanowire-based architectures. Furthermore, template-assisted electrodeposition provides control of the chemical composition (e.g., multi-layered nanowires), density/spatial control (arrangements of the nanowires within the template, e.g., crossed nanowires), and shape control (nanowires, nanotubes, nanowires with constrictions, aspect ratio). Many of these features are welcome for magnetic nanowires; for example, nanowires with a selected aspect ratio can be produced for use as magnetic elements in ferrofluids [110,111]. In addition, the magnetic reversal properties of the nanowires are directly influenced by the shape and aspect ratio [108,[112][113][114][115][116]. Furthermore, this excellent control enables tuning of the magnetic, magneto-transport, and thermoelectric properties of the resulting nanowires or nanowire arrays. Most of these advantages have not (yet) been explored for superconducting nanowires. However, there is still a demand for even smaller nanowire diameters for both conventional metallic and HTSc superconducting materials. Thus, a variety of ideas has already been discussed in the literature using other template materials, influenced by the progress of nanotechnology. These ideas comprise DNA sections as templates [117], molecular templates [118], and biomimetic templates from chitosan [119]. Other proposals include self-assembled Si templates [120], combinations of porous Si with high-resolution electron beam lithography [121], block copolymer double gyroid-derived ceramic templates [122], and zeolites [123]. Using these new approaches, new superconducting materials were also obtained in nanowire form, including MgB 2 and δ3-MoN [124]. Very recently, DNA origami as template material was used by Shani et al. [125] to prepare NbN nanowires. All these new approaches clearly demonstrate that the templating technique is still very interesting for fabricating superconducting nanowires with smaller and smaller diameters, and that more superconducting materials can be produced in nanowire form, which may lead to new and interesting physics. Other developments concern the growth technique itself, such as vapor-solid growth [126] and the metallic-flux nanonucleation technique [8]. Finally, we give some comments on possible applications of nanowires, nanowire arrays, and nanowire fabrics. As the conventional metallic superconducting nanowires can be easily separated from the templates, these nanowires can be applied as sensor elements or connecting wires in electronic circuits, as mentioned already at the beginning. An application of a superconducting nanowire array still within its template has not yet been described in the literature. For the templated HTSc nanowires, there are no applications envisaged in the literature, nor for the nanowire arrays. Undoubtedly, in contrast, the electrospun or blow-spun HTSc nanowires have a variety of possible applications in the form of nanowire network fabrics, as described in Refs.
[127][128][129], but the superconducting transition temperature, T c , being too close to the application temperature of 77 K (the temperature of liquid nitrogen), still hinders the planned applications, such as the "superconducting carpet" [127] or shielding materials, until materials with higher T c (and, of course, higher critical current density, j c ) are fabricated in nanowire form. Conclusions To conclude, the preparation of superconducting nanowires using the AAO template route works successfully for conventional metallic nanowires, whereas, for the preparation of HTSc nanowires, the problem of removing the nanowires from the templates prevails. Furthermore, the preparation of superconducting thin films on top of the AAO templates gave interesting results due to matching effects of the pore lattice with the flux line lattice. Again, the same approach does not work properly for the HTSc, as Al 2 O 3 is not a suitable substrate for HTSc thin film growth. Thus, the AAO template approach is feasible for growing nanowires of both conventional metallic and HTSc superconducting materials, but there is still a demand for smaller nanowire diameters, which may reveal new physics, e.g., concerning a possible increase of T c or the stabilization of uncommon crystallographic modifications in nanowire form; therefore, different types of templates are being discussed in the literature.
8,353.8
2021-07-31T00:00:00.000
[ "Physics" ]
Edition 1.1 of the PARSEME Shared Task on Automatic Identification of Verbal Multiword Expressions This paper describes the PARSEME Shared Task 1.1 on automatic identification of verbal multiword expressions. We present the annotation methodology, focusing on changes from last year's shared task. Novel aspects include enhanced annotation guidelines, additional annotated data for most languages, corpora for some new languages, and new evaluation settings. Corpora were created for 20 languages, which are also briefly discussed. We report organizational principles behind the shared task and the evaluation metrics employed for ranking. The 17 participating systems, their methods and obtained results are also presented and analysed. This paper describes edition 1.1 of the PARSEME Shared Task, which builds on this momentum. We amalgamated organizational experience from last year's task, a more polished version of the annotation methodology and an extended set of linguistic data, yielding an event that attracted 12 teams from 9 countries. Novel aspects in this year's task include additional annotated data for most of the languages, some new languages with annotated datasets and enhanced annotation guidelines. The structure of the paper is the following. First, related work is presented, then details on the annotation methodology are described, focusing on changes from last year's shared task. We have annotated corpora for 20 languages, which are briefly discussed. Main organizational principles behind the shared task, as well as the evaluation metrics, are reported next. Finally, participating systems are introduced and their results are discussed before we draw our conclusions. Related Work In the last few years, there have been several evaluation campaigns for MWE identification. First, the 2008 MWE workshop contained an MWE-targeted shared task. However, the goal of participants was to rank the provided MWE candidates instead of identifying them in raw texts. The recent DiMSUM 2016 shared task (Schneider et al., 2016) challenged participants to label English sentences in tweets, user reviews of services, and TED talks both with MWEs and supersenses for nouns and verbs. Last, the 1.0 edition of the PARSEME Shared Task in 2017 (Savary et al., 2017) provided annotated datasets for 18 languages, where the goal was to identify verbal MWEs in context. Our current shared task is similar in vein to the previous edition. However, the annotation methodology has been enhanced (see Section 3) and the set of languages covered has also been changed. Rosén et al. (2015) reports on a survey of MWE annotation in 17 treebanks for 15 languages, collaboratively documented according to common guidelines. They highlight the heterogeneity of MWE annotation practices. Similar conclusions have been drawn for Universal Dependencies (McDonald et al., 2013). With regard to these conclusions, we intended to provide unified guidelines for all the participating languages, in order to avoid heterogeneous, hence incomparable, datasets.
Enhanced Annotation Methodology The first PARSEME annotation campaign (Savary et al., forthcoming) generated rich feedback from annotators and language team leaders. It also attracted the interest of new teams, working on languages not covered by the previous version of the PARSEME corpora. About 80 issues were raised and discussed among dozens of contributors. This boosted our efforts towards a better understanding of VMWE-related phenomena, and towards a better synergy of terminologies across languages and linguistic traditions. The annotation guidelines were gradually enhanced, so as to achieve more clear-cut distinctions among categories, and make the decision process easier and more reliable. As a result, we expected higher-quality annotated corpora and better VMWE identification systems learned on them. Definitions We maintain all major definitions (unified across languages) introduced in edition 1.0 of the annotation campaign (Savary et al., forthcoming, Sec. 2). In particular, we understand multiword expressions as expressions with at least two lexicalized components (i.e. always realised by the same lexemes), including a head word and at least one other syntactically related word. Thus, lexicalized components of MWEs must form a connected dependency graph. Such expressions must display some degree of lexical, morphological, syntactic and/or semantic idiosyncrasy, formalised by the annotation procedures. As previously, syntactic variants of MWE candidates are normalised to their least marked form (called the canonical form) maintaining the idiomatic reading, before it is submitted to linguistic tests. A verbal MWE is defined as a MWE whose head in a canonical form is a verb, and which functions as a verbal phrase, unlike e.g. FR peut-être 'may-be'⇒'maybe' (which is always an adverbial). As in edition 1.0, we account for single-token VMWEs with multiword variants, e.g. ES hacerse 'make-self'⇒'become' vs. se hace 'self makes'⇒'becomes'. Typology Major changes in the annotation guidelines between edition 1.0 and 1.1 include redesigning the VMWE typology, which is now defined as follows: 1. Two universal categories, that is, valid for all languages participating in the task: (a) LIGHT VERB CONSTRUCTIONS (LVC), divided into two subcategories: i. LVCs in which the verb is semantically totally bleached (LVC.full), e.g. DE eine Rede halten 'hold a speech'⇒'give a speech', ii. LVCs in which the verb adds a causative meaning to the noun (LVC.cause), e.g. PL narazić na straty 'expose to losses' (b) VERBAL IDIOMS (VID), grouping all VMWEs not belonging to other categories, and most often having a relatively high degree of semantic non-compositionality, e.g. LT našta gula ant savivaldybių pečių 'the burden lies on the shoulders of the municipality'⇒'the municipality is in charge of the burden' 2. Three quasi-universal categories, valid for some language groups or languages, but not all: (a) INHERENTLY REFLEXIVE VERBS (IRV) - pervasive in Romance and Slavic languages, and present in Hungarian and German - in which the reflexive clitic (REFL) either always cooccurs with a given verb, or markedly changes its meaning or subcategorisation frame, e.g. PT se formar 'REFL form'⇒'graduate' (b) VERB-PARTICLE CONSTRUCTIONS (VPC) - pervasive in Germanic languages and Hungarian, rare in Romance and absent in Slavic languages - with two subcategories: i.
fully non-compositional VPCs (VPC.full), in which the particle totally changes the meaning of the verb, e.g. HU berúg 'in-kick'⇒'get drunk' ii. semi non-compositional VPCs (VPC.semi), in which the particle adds a partly predictable but non-spatial meaning to the verb, e.g. EN wake up (c) MULTI-VERB CONSTRUCTIONS (MVC) - close to semantically non-compositional serial verbs in Asian languages like Chinese, Hindi, Indonesian and Japanese (but also attested in Spanish), e.g. HI kar le 'do take'⇒'do (for one's own benefit)', kar de 'do give'⇒'do (for other's benefit)' 3. One language-specific category, introduced for Italian: (a) INHERENTLY CLITIC VERBS (LS.ICV), in which at least one non-reflexive clitic (CLI) either always accompanies a given verb or markedly changes its meaning or its subcategorisation frame, e.g. IT prenderle 'take-them'⇒'get beaten up' 4. One optional experimental category, to be considered in the post-annotation step: (a) INHERENTLY ADPOSITIONAL VERBS (IAV) - they include idiomatic combinations of verbs with prepositions or post-positions, depending on the language, e.g. HR ne dođe do usporavanja 'it will not come to delay'⇒'no delay will occur' Decision tree for annotation Edition 1.0 featured a two-stage annotation process, according to which VMWEs were supposed to be first identified in a category-neutral fashion, then classified into one of the VMWE categories. Since the annotation practice showed that VMWE identification is virtually always done in a category-specific way, for this year's task we constructed a unified decision tree, shown in Fig. 1. Annotation process and decision tree We propose the following methodology for VMWE annotation: Step 1 - identify a candidate, that is, a combination of a verb with at least one other word which could form a VMWE. If the candidate has the structure of a meaning-preserving variant, the following steps apply to its canonical form. This step is largely based on the annotators' linguistic knowledge and intuition after reading this guide. Step 2 - determine which components of the candidate (or of its canonical form) are lexicalized, that is, if they are omitted, the VMWE does not occur any more. Corpus and web searches may be required to confirm intuitions about acceptable variants. Step 3 - depending on the syntactic structure of the candidate's canonical form, formally check if it is a VMWE using the generic and category-specific decision trees and tests below. Notice that your intuitions used in Step 1 to identify a given candidate are not sufficient to annotate it: you must confirm them by applying the tests in the guidelines. Step 4 (experimental and optional) - if your language team chose to experimentally annotate the IAV category, follow the dedicated inherently adpositional verb (IAV) tests. These tests should always be applied once the 3 previous steps are complete, i.e. the IAV overlays the universal annotation. The decision tree below indicates the order in which tests should be applied in Step 3. The decision trees are a useful summary to consult during annotation, but contain very short descriptions of the tests. Each test is detailed and explained with examples in the following sections. Generic decision tree If you are annotating Italian or Hindi, go to the Italian-specific decision tree or Hindi-specific decision tree. For all other languages, follow the tree below.
Consistency checks Due to manpower constraints, we could not perform double annotation followed by adjudication. For most languages, only small fractions of the corresponding corpus were double-annotated (Sec. 4.2). Therefore, in order to increase the consistency of the annotations, we applied the consistency checking tool developed for edition 1.0 (Savary et al., forthcoming, Sec. 5.4). The tool provides an "orthogonal" view of the corpus, where all annotations of the same VMWE are grouped and can be corrected interactively. Previous experience showed that the use of this tool greatly reduced noise and silence errors. This year, almost all language teams completed the consistency check phase (with the exception of Arabic). Corpora For edition 1.1, we prepared annotated corpora for 20 languages divided into four groups: • Germanic languages: German (DE), English (EN) • Romance languages: Spanish (ES), French (FR), Italian (IT), Portuguese (PT), Romanian (RO) • Balto-Slavic languages: Bulgarian (BG), Croatian (HR), Lithuanian (LT), Polish (PL), Slovene (SL) • Other languages: Arabic (AR), Greek (EL), Basque (EU), Farsi (FA), Hebrew (HE), Hindi (HI), Hungarian (HU), Turkish (TR) Arabic, Basque, Croatian, English and Hindi were additional languages, compared to the first edition of the shared task. However, the Czech, Maltese and Swedish corpora were not updated and hence were not included in edition 1.1 of the shared task. The Basque corpus comprises texts from the whole UD corpus (Aranzabe et al., 2015) and part of the Elhuyar Web Corpora. The Bulgarian corpus comprises news articles from the Bulgarian National Corpus (Koeva et al., 2012). The Croatian corpus contains sentences from the Croatian version of the SETimes corpora: mostly running text but also selected fragments, such as introductory blurbs and image descriptions characteristic of newswire text. The English corpus consists of 7,437 sentences taken from three UD treebanks: the Gold Standard Universal Dependencies Corpus for English, the LinES parallel corpus and the Parallel Universal Dependencies treebank. The Farsi corpus is built on top of the MULTEXT-East corpora (QasemiZadeh and Rahimi, 2006), and VMWE annotations are added to a portion of Orwell's 1984 novel. The French corpus contains the Sequoia corpus (Candito and Seddah, 2012) converted to UD, the GDS French UD treebank, the French part of the Partut corpus, and part of the Parallel UD (PUD) corpus. The German corpus contains shuffled sentences crawled from online news, reviews and wikis, derived from the WMT16 shared task data (Bojar et al., 2016), and Universal Dependencies v2.0. The Greek corpus comprises Wikipedia articles and newswire texts from various on-line newspaper editions and news portals. The Hebrew corpus contains news and articles from Arutz 7 and HaAretz news websites, collected by the MILA Knowledge Center for Processing Hebrew. The Hindi corpus represents the news genre sentences selected from the test section of the Hindi Treebank (Bhat et al., 2015). The Hungarian corpus contains legal texts from the Szeged Treebank (Csendes et al., 2005). The Italian corpus is a selection of texts from the PAISÁ corpus of web texts (Lyding et al., 2014), including Wikibooks, Wikinews, Wikiversity, and blog services. The Lithuanian corpus contains articles from a Lithuanian news portal DELFI. The Polish corpus builds on top of the National Corpus of Polish (Przepiórkowski et al., 2011) and the Polish Coreference Corpus (Ogrodniczuk et al., 2015). These are balanced corpora, from which we
selected mainly daily and periodical press extracts. The Portuguese corpus contains sentences from the informal Brazilian newspaper Diário Gaúcho and from the training set of the UD_Portuguese-GSD v2.1 treebank. The Romanian corpus is a collection of articles from the concatenated editions of the Agenda newspaper. The Slovenian corpus contains parts of the ssj500k 2.0 training corpus (Krek et al., 2017), which consists of sampled paragraphs from the Slovenian reference FidaPLUS corpus (Arhar Holdt et al., 2007), including literary novels, daily newspapers, web blogs and social media. The Spanish corpus consists of newspaper texts from the Ancora corpus (Taulé et al., 2016), the UD version of Ancora, a corpus compiled by the IXA group at the University of the Basque Country, and parts of the training set of the UD v2.0 treebank. The Turkish corpus consists of 18,611 sentences of newswire texts in several genres. As shown in Table 2, most languages provided corpora containing several thousand VMWEs, totalling 79,326 VMWEs across all languages. The smallest corpus is in English, containing around 7,437 sentences and 832 VMWEs, and the largest one is in Hungarian, with 7,760 VMWEs. All corpora, except the Arabic one, are available under different flavours of the Creative Commons license. Format Edition 1.1 of the shared task saw a major evolution of the data format, motivated by a quest for synergies between PARSEME (Savary et al., forthcoming) and Universal Dependencies (Nivre et al., 2016), two complementary multilingual initiatives aiming at unified terminologies and methodologies. The new format, called cupt, combines in one file the conllu format and the parsemetsv format. As seen in Fig. 2, each token in a sentence is now represented by 11 columns: the 10 columns compatible with the conllu specification (notably: rank, token, lemma, part-of-speech, morphological features, and syntactic dependencies), and the 11th column containing the VMWE annotations, according to the same conventions as parsemetsv but with the updated set of categories (cf. Sec. 3.2). Note the presence of an IRV (tokens 2-3) embedded in a VID (tokens 2-5). The underscore '_', when it occurs alone in a field, is reserved for underspecified annotations. It can be used in incomplete annotations or in blind versions of the annotated files. The star '*', when it occurs alone in a field, is reserved for empty annotations, which are different from underspecified ones. This concerns sporadic annotations, typical for VMWEs (where not necessarily all words receive an annotation, as opposed to e.g. part-of-speech tags). Besides adding a new column to conllu, cupt also introduces additional conventions concerning comments (lines starting with '#'). The first line of each file must indicate the ordered list of columns (with standardized names) that this file contains, i.e. the same format can be used for any subset of standard columns, in any order. Each sentence is then preceded by the identifier of the source sentence (source_sent_id), which consists of three fields: (i) the persistent URI of the original corpus (e.g. of a UD treebank), (ii) the path of the source file in the original corpus, (iii) the sentence identifier, unique within the whole corpus. Items (i) and (ii) contain '.' if there is no external source corpus, as in the example of Figure 2. The following comment line contains the text of the current sentence. Validation scripts and converters were developed for cupt, and published before the shared task.
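As an illustration only (not one of the official validation scripts), the following minimal Python sketch reads the VMWE annotations from the 11th column of a cupt file as described above. The handling of the "1:CATEGORY" code on the first token of a VMWE, the bare "1" code on its further tokens, and ';'-separated multiple codes reflects my reading of the cupt conventions and should be checked against the published specification.

```python
# Minimal cupt reader sketch: extract VMWE annotations from the last column.
# "*" marks no annotation and "_" an underspecified one, as described above.

def read_vmwes(path: str):
    """Yield, for each sentence containing VMWEs, {vmwe_id: (category, [token forms])}."""
    vmwes = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.rstrip("\n")
            if not line:                          # blank line = sentence boundary
                if vmwes:
                    yield {k: (cat, toks) for k, (cat, toks) in vmwes.items()}
                vmwes = {}
                continue
            if line.startswith("#"):              # comment/metadata lines
                continue
            cols = line.split("\t")
            form, annotation = cols[1], cols[-1]
            if annotation in ("*", "_"):          # empty or underspecified
                continue
            for code in annotation.split(";"):    # e.g. "1:VID" or "1"
                idx, _, cat = code.partition(":")
                entry = vmwes.setdefault(int(idx), [None, []])
                if cat:
                    entry[0] = cat                # category appears on the first token
                entry[1].append(form)             # collect lexicalized token forms
    if vmwes:                                     # last sentence without trailing blank line
        yield {k: (cat, toks) for k, (cat, toks) in vmwes.items()}
```

Called on an annotated file, the generator yields, per sentence, a mapping from each VMWE identifier to its category and the surface forms of its lexicalized tokens.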
Inter-Annotator Agreement Contrary to standard practice in corpus annotation, most corpora were not double-annotated due to lack of human resources. Nonetheless, each language team has double-annotated a sample containing at least 100 annotated VMWEs. The number of sentences (S) and the numbers of VMWEs annotated by the first (A 1 ) and by the second annotator (A 2 ) are shown in Table 1. The last three columns report two measures to assess span agreement (tokens belonging to a VMWE) and one measure to assess the agreement on the VMWE category. Observed and expected agreement for κ span are based on the number of verbs V in the sample, assuming that a simplification of the task consists of deciding whether each verb belongs to a VMWE or not. If annotators perfectly agree on A 1=2 annotated VMWEs, then we estimate that they also agree on the remaining verbs that neither of them marked as belonging to a VMWE. As for κ cat , we consider only the A 1=2 VMWEs on which both annotators agree on the span, and calculate P O and P E based on the proportion of times both annotators agree on the VMWE's category label. Inter-annotator agreement scores can give an idea of the quality of the guidelines and of the training procedures for annotators. We observe a high variability among languages, especially for determining the span of VMWEs, with κ span ranging from 0.227 for Spanish to 0.984 for Turkish. Macro-averaged κ span is 0.691, which is superior to the macro-averaged κ unit reported in 2017, which was 0.58 (Savary et al., 2017). Categorization agreement results are much more homogeneous, with a macro-average κ cat of 0.836, which is also slightly higher than the one obtained in 2017, which was 0.819. The variable agreement values observed could be explained by language and corpus characteristics (e.g. web texts are harder to annotate than newspapers). They could also be explained by the fact that the double-annotated samples are quite small. Finally, they could indicate that the guidelines are still vague and that annotators do not always receive appropriate training. In reality, probably a mixture of all these factors explains the low agreement observed for some languages. In short, Table 1 strongly suggests that there is still room for improvement in (a) guidelines, (b) annotator training, and (c) annotation team management, best practices, and methodology. It should also be noted that lower agreement values may correlate with the results obtained by participants: the lower the IAA for a given language (i.e. the more difficult the task is for humans), the lower the results of automatic MWE identification. Nevertheless, we believe that the systematic use of our in-house consistency checks tool helped to homogenize some of these annotation disagreements (Sec. 3.4).
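The verb-level simplification described for κ span can be made concrete with a small sketch. This is a reconstruction of my reading of that description, not the organizers' evaluation code, and the example counts below are invented for illustration.

```python
# Sketch of the verb-level simplification of kappa_span: each of v verbs is a
# yes/no decision ("belongs to a VMWE"); annotator 1 says yes for a1 verbs,
# annotator 2 for a2, and both agree on a12 of them. Cohen's kappa is then
# computed from the resulting 2x2 agreement table.

def kappa_span(v: int, a1: int, a2: int, a12: int) -> float:
    both_no = v - a1 - a2 + a12                      # verbs neither annotator marked
    p_o = (a12 + both_no) / v                        # observed agreement
    p_e = (a1 * a2 + (v - a1) * (v - a2)) / v ** 2   # chance agreement from marginals
    return (p_o - p_e) / (1.0 - p_e)

if __name__ == "__main__":
    # Illustrative numbers only (not taken from Table 1).
    print(round(kappa_span(v=500, a1=120, a2=110, a12=100), 3))  # -> 0.831
```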
Shared Task Organization Each language in the shared task was handled by a team that was responsible for the choice of subcorpora and for the annotation of VMWEs, in a similar setting as in the previous edition. For each language, we then split its corpus into training, test and development sets (train/test/dev), as follows: • If the corpus has fewer than 550 VMWEs: Take sentences containing 90% of the VMWEs as test, and the other 10% as a small training corpus. • If the corpus has between 550 and 1,500 VMWEs: Take sentences containing 500 VMWEs as test, and take the rest for training. • If the corpus has between 1,500 and 5,000 VMWEs: Take sentences containing 500 VMWEs as test, take sentences containing 500 VMWEs as dev, and take the rest for training. • If the corpus has more than 5,000 VMWEs: Take sentences containing 10% of the VMWEs as test, take sentences containing 10% of the VMWEs as dev, and take the remaining 80% for training. As in edition 1.0, participants could submit their systems to two tracks: open and closed. Systems in the closed track were only allowed to train their models on the train and dev files provided. In this edition, we distinguished sentences based on their origin, so as to make sure that the fraction of each sub-corpus is the same in all splits for each language. For example, around 59% of all Basque sentences came from UD, while the other 41% came from the sub-corpus Elhuyar. We have made sure that similar percentages also applied to test/train/dev when taken in isolation. Due to this balancing act, for most languages, we could not keep the VMWEs in the same split as in edition 1.0. Evaluation Measures The goal of the evaluation measures is to represent the quality of system predictions when compared to the human-annotated gold standard for a given language. As in edition 1.0, we define two types of evaluation measures: a strict per-VMWE score (in which each VMWE in gold is either deemed predicted or not, in a binary fashion); and a fuzzy per-token score (which takes partial matches into account). For each of these two, we can calculate precision (P), recall (R) and F 1 -scores (F). Orthogonally to the type of measure, there is the choice of what subset of VMWEs to take into account from gold and system predictions. As in the previous edition, we calculate a general category-agnostic measure (both per-VMWE and per-token) based on the totality of VMWEs in both gold and system predictions - this measure only considers whether each VMWE has been properly predicted, regardless of category. We also calculate category-specific measures (both per-VMWE and per-token), where we consider only the subset of VMWEs associated with a given category. We additionally consider the following phenomenon-specific measures, which focus on some of the challenging phenomena specifically relevant to MWEs (Constant et al., 2017): • MWE continuity: We calculate per-VMWE scores for two different subsets: continuous VMWEs, e.g. TR istifa edecek 'resignation will-do'⇒'he/she will resign', and discontinuous VMWEs, e.g. SL imajo investicijske načrte 'they-have investment plans'⇒'they have investment plans'. (2) it is not identical to another VMWE, i.e.
Systems may predict VMWEs for all languages in the shared task, and the aforementioned measures are independently calculated for each language. Additionally, we calculate a macro-average score based on all of the predictions. In this case, the precision P for a given measure (e.g. for continuous VMWEs) is the average of the precisions for all 19 languages; Arabic is not considered due to delays in the corpus release. Missing system predictions are assumed to have P = R = 0. The recall R is averaged in the same manner, and the average F score is calculated from these averaged P and R scores (a minimal sketch of this scoring is given below). System Results For the 2018 edition of the PARSEME Shared Task, 12 teams submitted 17 system results: 13 to the closed track and 4 to the open track. No team submitted system results for all 20 languages of the shared task, but 11 teams covered 19 languages (all except Arabic). Detailed result tables are reported on the shared task website. In the tables, systems are referred to by anonymous nicknames. System authors and their affiliations are available in the system description papers published in these proceedings. As for the best performing systems, TRAPACC and TRAVERSAL were ranked first for 8 and 7 languages, respectively. TRAVERSAL is more effective in Slavic and Romance languages, whereas TRAPACC works well for German and English. In the "Other" language group, GDB-NER achieved the best results for Farsi and Turkish, and CRF approaches proved to be the best for Hindi. The best results for Bulgarian were obtained by varIDE, based on a Naive Bayes classifier. Results per language show that Hungarian and Romanian were the "easiest" languages for the systems, with best MWE-based F-scores of 90.31 and 85.28, respectively. Hebrew, English and Lithuanian show the lowest MWE-based F-scores, not exceeding 23.28, 32.88 and 32.17, respectively. This is likely due to the amount of annotated training data: Hungarian had the highest, while English and Lithuanian the lowest, number of VMWEs in the training data. A notable exception to this tendency is Hindi, where good results (an F-score of 72.98) could be achieved building on a small amount of training data. This is probably due to the high number of multi-verb constructions (MVCs) in Hindi, which are usually formed by a sequence of two verbs and hence relatively easily identified by relying on POS tags. Table 12 shows the effectiveness of MWE identification with regard to MWE categories. The highest F-scores were achieved for IRVs (especially for Balto-Slavic languages). This might be due to the fact that IRVs tend to be continuous and must contain a reflexive pronoun/clitic, so the presence of such a pronoun in the immediate neighborhood of a verb is a strong predictor for IRVs. The LVC.full category is present in all languages; interestingly, these are most effectively identified in the "Other" language group. Idioms occur in the test corpora of almost all languages (except Farsi), and they can be identified to the greatest extent in Romance languages. VPCs seem to be the easiest to find in Hungarian.
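The macro-averaged scores used above and in the phenomenon-specific comparisons below are obtained by averaging precision and recall over all 19 languages, with missing submissions counted as zero, and only then computing F. A minimal sketch, with illustrative data structures rather than the official evaluation scripts:

```python
def prf(gold, pred):
    """Strict per-VMWE precision/recall/F over sets of (hashable) VMWE spans."""
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def macro_average(per_language, languages):
    """Average P and R over all languages (missing -> 0), then derive F."""
    p = sum(per_language.get(lang, (0.0, 0.0, 0.0))[0] for lang in languages) / len(languages)
    r = sum(per_language.get(lang, (0.0, 0.0, 0.0))[1] for lang in languages) / len(languages)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Hypothetical usage for two languages, one of them without a submission:
scores = {"HU": prf({("a", "b"), ("c",)}, {("a", "b")})}
print(macro_average(scores, ["HU", "RO"]))
```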
In regard to the phenomenon-specific macro-average results (Tables 4 to 11), let us have a closer look at the F 1 -MWE measure of the 11 systems which submitted results for all 19 languages, except MWE-TreeC (whose results are hard to interpret). The differences are: (i) from 13 to 28 points (17 points on average) for continuous vs. discontinuous VMWEs, (ii) from 14 to 43 points (27 points on average) for multi-token vs. single-token VMWEs, (iii) from 45 to 56 points (50 points on average) for seen-in-train vs. unseen-in-train VMWEs, and (iv) from 13 to 27 points (20 points on average) for identical-to-train vs. variant-of-train VMWEs. These results confirm that the phenomena they focus on are major challenges in the VMWE identification task, and we suggest that the corresponding measures should be systematically used for future evaluation. The hardest challenge is the identification of unseen-in-train VMWEs. This result is not a surprise, since MWE-hood is by nature a lexical phenomenon, that is, a particular idiomatic reading is available only in the presence of a combination of particular lexical units. Replacing one of them by a semantically close lexeme usually leads to the loss of the idiomatic reading, e.g. force one's hand 'compel someone to act against her will' is an idiom, while force one's arm can only be understood literally. Few other, non-lexical, hints are available to distinguish a particular VMWE occurrence from a literal expression, because a VMWE usually takes syntactically regular forms. Morphosyntactic idiosyncrasy (e.g. the fact that a given VMWE allows some and blocks some other regular syntactic transformations) is a property of types rather than tokens. We expect, therefore, satisfactory unseen-in-train VMWE identification results mostly from systems using large-scale VMWE lexicons or semi/unsupervised methods and very large corpora. Conclusions and Future Work We reported on edition 1.1 of the PARSEME Shared Task aiming at identifying verbal MWEs in texts in 20 languages. We described our corpus annotation methodology, the data provided to the participants, the shared task modalities and the evaluation measures. The official results of the shared task were also presented and briefly discussed. The outputs of individual systems should be compared more thoroughly in the future, so as to see how systems with different architectures cope with different phenomena. For instance, it would be interesting to check if, as expected, discontinuous VMWEs are handled better by parsing-based methods vs. sequential taggers, or by LSTMs vs. other neural network architectures. Compared to the first edition in 2017, we attracted a larger number of participants (17 vs. 7), with 11 of the submissions covering 19 languages. We expect that this growing interest in the modeling and computational treatment of verbal MWEs will motivate teams working on corpus annotation, especially from new language families, to join the initiative. We expect to maintain and continuously increase the quality and the size of the existing annotated corpora. For instance, we have identified weaknesses in the guidelines for MVCs that will require enhancements. Furthermore, we need to collect feedback about the IAV experimental category, and decide whether to consolidate its annotation guidelines.
Our ambitious goal for a future shared task is to extend annotation to other MWE categories, not only verbal ones. We are aware of corpora and guidelines for individual languages (e.g. English or French) and/or MWE categories (e.g. noun-noun compounds). However, a considerable effort will be required to design and apply universal annotation guidelines for the annotation of new MWE categories. We strongly believe that the large community and collective expertise gathered in the PARSEME initiative will allow us to take on this challenge. We definitely hope that this initiative will continue in the next years, yielding available multilingual annotated corpora that can foster MWE research in computational linguistics, as well as in linguistics and translation studies. Appendix B: Shared task results. Figure 1: Decision tree for joint VMWE identification and classification. Note that the first 4 tests are structural. They first hypothesize as VIDs those candidates which: (S.1) do not have a unique verb as head, e.g. HE britanya nas'a ve-natna 'im micrayim 'Britain carried and gave with Egypt'⇒'Britain negotiated with Egypt', (S.2) have more than one lexicalized dependent of the head verb, e.g. EL ρίχνω λάδι στη φωτιά 'pour oil to-the fire'⇒'make a bad or negative situation feel worse', or (S.3) have a lexicalized subject, e.g. EU deabruak eraman 'devil-the.ERG take'⇒'be taken by the devil, go to hell'. The remaining candidates, i.e. those having exactly one head verb and one lexicalized non-subject dependent, trigger category-specific tests depending on the part-of-speech of this dependent (S.4): candidates with a reflexive clitic are annotated as IRV if the IRV-specific tests are positive; candidates with a particle are annotated as VPC.full or VPC.semi if the VPC-specific tests are positive; candidates consisting of a verb with no lexicalized dependent are annotated as MVC if the MVC-specific tests are positive, and otherwise fall back to the VID-specific tests; candidates failing the applicable tests are not annotated as VMWEs.
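The following is a minimal sketch of the decision procedure summarized in the caption above. The branch conditions are reconstructed from the partially rendered figure and the surrounding text, the individual test functions are placeholders standing in for the guidelines' linguistic tests, and branches not visible here (e.g. nominal dependents leading to LVC tests) are elided.

```python
def classify_candidate(cand, tests):
    """Skeleton of the joint VMWE identification/classification decision tree.

    `cand` exposes the structural properties probed by tests S.1-S.4;
    `tests` maps category names to predicate functions standing in for the
    category-specific tests of the annotation guidelines.
    Returns a category label, or None if the candidate is not a VMWE.
    """
    # S.1-S.3: structurally marked candidates are hypothesized as VIDs.
    if (not cand.has_unique_verb_head
            or cand.n_lexicalized_dependents > 1
            or cand.has_lexicalized_subject):
        return "VID" if tests["VID"](cand) else None
    # S.4: dispatch on the POS of the single lexicalized non-subject dependent.
    if cand.dependent_pos == "reflexive_clitic":
        return "IRV" if tests["IRV"](cand) else None
    if cand.dependent_pos == "particle":
        return tests["VPC"](cand)           # "VPC.full", "VPC.semi" or None
    if cand.dependent_pos == "verb":        # verb with no lexicalized dependent
        if tests["MVC"](cand):
            return "MVC"
        return "VID" if tests["VID"](cand) else None
    ...  # remaining branches (e.g. nominal dependents -> LVC tests) elided
```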
Table 1: Per-language inter-annotator agreement on a sample of S sentences, with A 1 and A 2 VMWEs annotated by each annotator. F span is the F-measure between annotators, κ span is the agreement on the annotation span and κ cat is the agreement on the VMWE category. EL, EN and HI provided corpora annotated by more than 2 annotators; we report the highest scores among all possible annotator pairs. The F span score is the MWE-based F-measure when considering that one of the annotators tries to predict the other one's annotations; this is identical to the F1-MWE score used to evaluate participating systems (Sec. 6). F span is an optimistic estimator which ignores chance agreement. On the other hand, κ span and κ cat estimate to what extent the observed agreement P O exceeds the expected agreement P E, that is, κ = (P O − P E) / (1 − P E).
7,123.2
2018-08-01T00:00:00.000
[ "Computer Science", "Linguistics" ]
Synthesis and characterisation of new antimalarial fluorinated triazolopyrazine compounds Nine new fluorinated analogues were synthesised by late-stage functionalisation using Diversinate™ chemistry on the Open Source Malaria (OSM) triazolopyrazine scaffold (Series 4). The structures of all analogues were fully characterised by NMR, UV and MS data analysis; three triazolopyrazines were confirmed by X-ray crystal structure analysis. The inhibitory activity of all compounds against the growth of the malaria parasite Plasmodium falciparum (3D7 and Dd2 strains) and the cytotoxicity against a human embryonic kidney (HEK293) cell line were tested. Some of the compounds demonstrated moderate antimalarial activity with IC50 values ranging from 0.2 to >80 µM; none of the compounds displayed any cytotoxicity against HEK293 cells at 80 µM. Antimalarial activity was significantly reduced when C-8 of the triazolopyrazine scaffold was substituted with CF3 and CF2H moieties, whereas incorporation of a CF2Me group at the same position completely abolished antiplasmodial effects. Introduction Malaria is an infectious disease caused by Plasmodium parasites and is a major global threat to human health. The WHO World Malaria Report 2021, estimates 241 million cases of malaria and 627,000 deaths globally in 2020, an increase of 12% from the previous year [1]. The increase was mainly from countries in the WHO African region, which accounted for about 95% of malaria cases and deaths, and was associated with service disruptions during the COVID-19 pandemic [1]. Infants and young children are at a disproportionately high risk of severe malaria and death, as 80% of deaths in this region were children under five [1]. Whilst there are drugs available for the treatment of malaria infections, most have now succumbed to parasite drug resistance and thus reduced clinical efficacy [2,3]. Consequently, new antiplasmodial drugs with novel malaria targets are urgently needed to combat the global problem of parasite drug resistance. For more than 10 years, the Open Source Malaria (OSM) consortium [4] has had an interest in identifying and developing novel antimalarial compounds that belong to a variety of chemotypes, one of which includes the 1,2,4-triazolo [4,3-a]pyrazine scaffold [5]. This particular series, known as OSM Series 4, has demonstrated significant potency against various strains of Plasmodium falciparum (Pf) with IC 50 values as low as 16 nM. The series also showed decent in vitro human liver microsome and human hepatocyte stability, with hepatic intrinsic clearance of <8.1 µL/min/mg [5]. Furthermore, minimal poly-pharmacology and cytotoxicity have been identified for this series to date, giving confidence in its specificity and tolerability, and thus supporting on-going efforts towards the continued development of this unique antimalarial structure class [5]. Through investigations into the mechanism of action of OSM Series 4 compounds, it has been suggested that this nitrogen-rich chemotype inhibits the ATPase, PfATP4 [6]. PfATP4 functions as a Na + /H + -ATPase, which allows the malaria parasite to regulate Na + to maintain cell homeostasis [7][8][9]. Interfering with this process means the parasite is unable to regulate Na + [10], resulting in a significant increase in the acid load of the cell, which can lead to parasite growth inhibition and ultimately parasite death [8,9]. One of the current Series 4 aims includes lead optimisation to improve solubility and metabolic stability while retaining potency [5]. 
Late-stage functionalisation (LSF) is a strategy involving the use of C-H bonds as chemical handles for the introduction of various functional groups, which has been widely employed by medicinal chemists to generate new analogues of lead compounds without the need for de novo synthesis [5]. Baran et al. has developed an operationally simple, radical-based functionalisation strategy that allows direct transformation of C-H bonds to C-C bonds in a practical manner [11]. This strategy involves the utilisation of sodium and zinc sulfinate-based reagents (marketed by Merck as Diversinates™) to functionalise heteroaromatic C-H bonds of unprotected systems in a variety of solvents at room temperature and without the requirement of an inert atmosphere or solvent purification [11][12][13]. In our previous work on OSM Series 4 scaffolds [14], we had undertaken some preliminary investigations into the use of commercially available Diversinate™ reagents and showed the bicyclic nitrogen-rich core of Series 4 was amenable to this chemistry, with radical sulfinate substitution occurring with high-selectivity at C-8 and in respectable yields. This paper reports additional and more thorough Diversinate™ studies on three phenethyl ether substituted triazolopyrazine scaffolds, with a particular focus on incorporating the fluoro fragments -CF 3 , -CF 2 H and -CF 2 Me into the OSM Series 4 structures via LSF chemistry. The new library of triazolopyrazines were all evaluated in vitro for antimalarial activity and cytotoxicity. Results and Discussion Previous structure-activity relationship (SAR) studies reported that any substitution at the C8 position of Series 4 triazolopyrazines can lower the potency for P. falciparum [14][15][16]. However, a recent preliminary SAR study identified that substitution at the C8 position with trifluoromethane and difluoroethane moieties using Diversinate™ chemistry increased the potency of the parent scaffold (compound 2), suggesting the potential of these fluoroalkyl groups for improving the potency of other promising leads within the OSM project [14]. Fluorinecontaining compounds have exhibited wide applications in pharmaceuticals and agrochemicals -approximately 20% of marketed drugs are fluoro-pharmaceuticals, while for agrochemicals, 53% are fluoro-compounds [17,18]. In recent decades, the introduction of fluorine or a fluorinated functional group into organic compounds has become increasingly prevalent in drug design and development, as fluorine substitution can greatly influence drug potency, pharmacokinetic and pharmacodynamic properties [19]. Therefore, in this study we undertook additional LSF investigations by introduction of fluoroalkyl groups to OSM leads with the aim to probe the SAR of 8-fluoroalkylated triazolopyrazine derivatives and further improve their potency. Based on the existing SAR data for the C3 position of the triazolopyrazine core, substituents with a phenyl ring containing alkyl, cyano, nitro, or halogenated groups at the para-position were crucial for activity [5,16,20]. Thus, compounds 1-3 with a para-phenyl-OCHF 2 , -Cl or -CN substituent at the C3 position of the triazole were selected as scaffolds in this study. In addition, the reported SAR data also indicated that the use of an ether linker on the pyrazine ring, with a two methylene unit chain length between the heterocyclic core and benzylic substituent, improved the potency of these compounds [16]. 
Hence, scaffolds 1-3 were then converted into a series of ether-linked triazolopyrazines with phenethyl alcohol or ethanol using the standard nucleophilic displacement method as previously described (Scheme 1) [14,16]. Structures of synthesised compounds 4-9 were determined using 1D/2D NMR and HRMS (Supporting Information File 1, S6-S23). Crystals of compounds 5 and 6 were also analysed by X-ray crystallography studies (Supporting Information File 1, S51, S52, and S54), which confirmed the structure assignment. Compounds 4-6 were known OSM compounds that displayed good selective activity with IC 50 values of <1 µM [16,21], whereas 7-9 are new ether derivatives without a phenyl ring that were synthesised for SAR evaluation. The incorporation of fluoroalkyl groups at the C8 position of three OSM leads (4-6) was performed using Diversinate™ chemistry following the previously described method (Scheme 2) [14]. The Diversinate™ reagents used in this study were zinc trifluoromethanesulfinate (TFMS), sodium 1,1-difluoroethanesulfinate (DFES) and zinc difluoromethanesulfinate (DFMS). In brief, a mixture of the respective scaffold, Diversinate™ (2 equiv), and TFA (5 equiv) in DMSO/CH 2 Cl 2 / H 2 O (5:5:2) was stirred for 30 min at room temperature and cooled to 4 °C. Then, aqueous tert-butyl hydroperoxide (TBHP, 70%, 3 equiv) was slowly added over 5 min and stirring continued for 1 h. The mixture was slowly warmed to room temperature with stirring for another 24 h. The products were isolated and purified using C 18 All compounds were tested for their antimalarial activity against P. falciparum 3D7 (chloroquine-sensitive strain) and Dd2 (chloroquine, pyrimethamine and mefloquine drug-resistant strain) (Table 1). In terms of the cLogP values of these compounds, an increase in hydrophobicity did not improve the potency. A similar trend was also observed in a previous study [20], where an increase in the hydrophobicity of several triazolopyrazine derivatives resulted in significant drops in antima-larial activity. However, the same paper also commented that the cLogP values showed no significant correlation with experimental potency when compared to other Series 4 triazolopyrazines [20]. In addition, consistent with reported SAR data [14,16,21], the ether-linked compounds 4-6 exhibited strong activity with IC 50 values of 0.2-1.2 µM, whereas compounds 7-9 with the removal of the phenyl ring from the ether methylene group resulted in a loss of potency at the tested concentrations. For fluorinated compounds, previous studies [14] reported that the introduction of CF 3 and CF 2 Me groups at the C-8 position of scaffold 2 improved the antimalarial activity. In particular, compounds with a CF 2 Me moiety showed a 7.3-fold improvement in potency (IC 50 = 1.7 µM) compared to the parent scaffold 2 (IC 50 = 12.6 µM) [14]. Herein, ether-linked triazolopyrazine scaffolds with CF 3 or CF 2 H moieties at the C-8 position displayed weak antimalarial activity, whereas incorporation of a CF 2 Me group completely abolished the effect at the tested concentrations, in comparison to the parent scaffolds 4-6. These data suggest that substituents at the C-5 position of the triazolopyrazine core appear to influence the antimalarial activity of C-8 fluoroalkyl-substituted compounds. Additional investigations on other OSM leads are warranted to further expand the SAR surrounding the 8-position of Series 4 triazolopyrazines with fluoroalkyl substituents or other functional groups. 
In addition, the replacement of H-8 by small electron-withdrawing groups appeared to be detrimental for activity in Series 4 compounds. Supporting Information Supporting Information File 1 General experimental procedures, NMR spectra and characterisation data for all new triazolopyrazine compounds and X-ray crystallography data for compounds 5, 6 and 18.
2,247
2023-01-31T00:00:00.000
[ "Chemistry", "Medicine" ]
Stabilization of transmittance fluctuations caused by beam wandering in continuous-variable quantum communication over free-space atmospheric channels Transmittance fluctuations in turbulent atmospheric channels result in quadrature excess noise which limits applicability of continuous-variable quantum communication. Such fluctuations are commonly caused by beam wandering around the receiving aperture. We study the possibility to stabilize the fluctuations by expanding the beam, and test this channel stabilization in regard of continuous-variable entanglement sharing and quantum key distribution. We perform transmittance measurements of a real free-space atmospheric channel for different beam widths and show that the beam expansion reduces the fluctuations of the channel transmittance by the cost of an increased overall loss. We also theoretically study the possibility to share an entangled state or to establish secure quantum key distribution over the turbulent atmospheric channels with varying beam widths. We show the positive effect of channel stabilization by beam expansion on continuous-variable quantum communication as well as the necessity to optimize the method in order to maximize the secret key rate or the amount of shared entanglement. Being autonomous and not requiring adaptive control of the source and detectors based on characterization of beam wandering, the method of beam expansion can be also combined with other methods aiming at stabilizing the fluctuating free-space atmospheric channels. Introduction The development of experimental quantum optics in the past decades led to the emergence and tremendous progress in the field of quantum information, which studies the possibility to store, transmit and process information encoded into quantum states. Quantum communication, a particular application of quantum information processing, is very naturally suggested by the long coherence time and relatively low coupling to the environment which is typical for optical quantum states. This allows one to use quantum states of light for quantum communication, particularly for sharing a quantum resource (such as entanglement) to connect quantum devices, or for quantum key distribution (QKD), aimed at securely distributing random secret keys between two legitimate parties. The methods of QKD are called protocols and were first suggested on the basis of strongly nonclassical systems such as single photons or entangled photon pairs [1]. Later the natural use of continuous-variable (CV) [2] quantum states of light was suggested [3]. This resulted in the development of CV QKD protocols and methods to produce, characterize and share CV entanglement. CV QKD protocols are typically based on the use of Gaussian quadrature-modulated coherent [4,5] or squeezed states [6,7] of light and homodyne detection at the receiving station. Equivalently, quadrature-entangled states and homodyne detection at both the sending and the receiving stations can be used [8]. The security of Gaussian CV QKD protocols [9] was shown against general attacks in the asymptotic regime [10] and against collective attacks in the finite-size regime [11,12] based on the optimality of Gaussian attacks [13][14][15]. This approach allows to broadly study the security of the protocols using covariance matrices, which explicitly characterize Gaussian states of light [16]. 
Gaussian CV QKD protocols were well studied and successfully implemented in long-distance fiber links [5,17,18], where the transmittance is typically stable and the added channel excess noise is extremely low. On the other hand, atmospheric quantum channels, which are of utmost importance for long-distance satellite communication [19] or free-space terrestrial communication waiving the requirement of necessity of fiber-optical infrastructure, are typically inclined to transmittance fluctuations due to turbulence effects [20][21][22], also affected by weather conditions [23]. Such transmittance fluctuations (also referred to as channel fading) were analyzed in their impact on applicability of CV quantum communication in the case of atmospheric turbulence [24][25][26][27] and uniform transmittance fluctuations [28]. It was shown that channel fading can be destructive to CV QKD protocols and limit the possibility to share CV entangled states. The main reason for this is that the transmittance fluctuations lead to additional excess noise appearing in the variances of the quadrature measurement results [24]. Such fading-related excess noise is proportional to the variance of the transmittance fluctuations and the overall variance of the quadrature distributions in the quantum signal. Therefore in order to allow CV QKD or quantum resource sharing over a fluctuating channel the stabilization of the channel transmittance can be advantageous as a feasible alternative to channel post selection [24] or entanglement distillation [29,30]. In the case of mid-range atmospheric optical channels, the transmittance fluctuations are typically caused by beam wandering, when the beam spot is randomly traveling around the receiving aperture [31], in addition to such turbulence effects, as, e.g., scintillation, phase degradation of the wave front, and beam spreading. The transmittance fluctuations caused by beam wandering are then governed, in particular, by the ratio between the beam size and the size of the aperture [32]. It was suggested that an increase of this ratio would naturally stabilize the channel and make it more suitable for quantum communication tasks [24], similarly to optimization of the beam spot size for given channel parameters in classical free-space optical communication [33,34]. In the present paper we discuss the method of beam expansion, aimed at compensating the channel fluctuations caused by beam wandering, in detail for CV quantum communication tasks, where the signal intensity is drastically limited compared to the classical free-space optical communication. We report the experimental test of the method based on the spatial expansion of the beam and the subsequent characterization of the channel transmittance. We show that the fading can be indeed stabilized and the variance of transmittance fluctuations (and, subsequently, quadrature excess noise) can be substantially reduced at the cost of increase of the overall loss of the channel. This leads to the trade-off between channel stabilization and its applicability to entanglement sharing or CV QKD. Therefore the suggested method of channel stabilization should be optimized to reach maximum key rate or secure distance for CV QKD or maximum shared entanglement in practical quantum applications. Fading due to beam wandering in CV quantum communication The most feasible CV quantum communication and QKD protocols are based on Gaussian states and operations [16]. 
It is well known that Gaussian states and their properties are explicitly described by the first and the second moments of the field quadrature operators, which can be introduced through the mode's quantum operators as x = a† + a and p = i(a† − a), i.e. by the mean values ⟨x⟩, ⟨p⟩ and by the covariance matrix γ with elements of the form γ_{i,j} = ⟨r_i r_j⟩ − ⟨r_i⟩⟨r_j⟩, where r_i = {x_i, p_i} is the quadrature vector of the i-th mode. It was shown that channel fading leads to excess noise in the quadrature variance, which is proportional to the variance of the channel fluctuations and the variance of the state propagating through the channel [30], such that the variance of a quadrature r_i on the output of a purely attenuating fading channel becomes V_{r_i}^{out} = ⟨√η⟩²(V_{r_i} − 1) + 1 + f_{r_i}, where f_{r_i} = Var(√η)(V_{r_i} − 1) is the excess noise due to fading, which depends on the variance of the transmittance fluctuations Var(√η) = ⟨η⟩ − ⟨√η⟩² and the r_i-quadrature variance V_{r_i} of the source. Noise due to fading is therefore generally phase-sensitive, but in what follows we assume, with no loss of generality, phase-space symmetry of the considered states, having variance V_{r_i} = V for all i in any quadrature, and the subsequent phase independence of the noise f = Var(√η)(V − 1). This noise reduces and possibly destroys the entanglement of a Gaussian state shared over a fading channel and also decreases the secret key rate of Gaussian CV QKD. It can lead to a loss of security in CV QKD [24], i.e., turn the key rate to zero. The effect is more pronounced for stronger transmittance fluctuations, lower mean channel transmittance and larger initial state variance V. One of the main causes of transmittance fluctuations in an atmospheric channel is beam wandering [31], when the optical beam moves around the aperture of the receiving detector and becomes clipped. It was studied for the transmission of quantum states of light, for which the transmittance distribution was shown to be governed by the log-negative Weibull distribution, cut at a certain value of transmittance η_0 [32,35]. The distribution is then given by its scale and shape parameters, expressed through the beam-center position variance σ²_b and the ratio a/W of the aperture radius a and the beam-spot radius W, so that the maximum transmittance, reached for a perfectly centered beam, is defined by η_0 = 1 − exp(−2a²/W²). The beam-spot fluctuation variance σ²_b is related to the Rytov parameter [36], defining the turbulence strength, which can be obtained from the atmospheric structure constant of the refractive index C_n [37]. The latter, however, was not directly measured in our experiment, and in the following we describe the beam-spot fluctuations by the variance σ²_b. It was naturally predicted that the expansion of the beam, i.e. the decrease of the ratio a/W, would result in a stabilization of the channel transmittance at the cost of a decrease of the mean transmittance [24], a technique also used in classical optical communication [33,34]. In our research we verify and confirm this conjecture and study the effect of beam expansion on the channel properties, the efficiency of sharing quantum entanglement and the security of CV QKD through a fading channel. Experimental set-up and results The possibility to stabilize the fading channel by expanding the beam was studied in a real-world scenario in the city of Erlangen. The point-to-point free-space channel of 1.6 km length connects the building of the Max Planck Institute for the Science of Light with the computer science building of the Friedrich-Alexander-University Erlangen-Nürnberg.
We use a grating stabilized continuous wave diode laser with a wavelength of λ = 809 nm. The mode of this laser is cleaned using a single mode fiber before the beam is expanded using a telescope (see Fig 1). The beam is then sent through the fading free-space channel to Bob. At Bob we use an achromatic lens with a diameter of a = 150 mm and a focal length of 800 mm, which defines our aperture. The beam width of the received beam, i.e. the aperture-to beam size ratio a/W, can be adjusted with the sender telescope. A PIN photodiode detector (bandwidth 150 kHz) is used to measure the fluctuating transmittance of the channel. To estimate the beam width at Bob we use a CCD camera and a screen. No adaptive strategy has been used at Bob's station or between Alice and Bob. The profiles of the transmittance distributions for different beam expansion settings that illustrate the change of the statistics of the channel fading are given in Fig. 2. The transmittance data was analyzed to obtain the mean values of transmittance η and √ η , and the resulting Fig. 1. A schematic view of the experimental set-up. At the sender Alice we use a telescope to expand the beam and adjust its beam width. Subsequently the beam is sent through our 1.6 km free-space link to the receiver Bob. There we use an achromatic lens with a diameter of a = 150 mm and measure the fluctuating transmission using a PIN photodiode detector and an analogue-to-digital converter. To estimate the aperture-to-beam size ratio we use a CCD camera and a screen. variance V ar( √ η), which governs the evolution of a covariance matrix after propagating through the fading channel. The results are given in Fig. 3 along with the values, obtained from the analytical Weibull distribution for the beam-spot fluctuation variance of σ 2 b = 0.3, which is set so in all the subsequent calculations except for these, resulting in the plots in Fig. 5. The results of calculations from the experimentally obtained data demonstrate qualitatively the same tendencies with the decrease of the aperture-to-beam size ratio as the theoretical prediction: it is clearly visible from the plots that the expansion of the beam (i.e., decrease of the aperture-to-beam size ratio) reduces the fluctuations of the transmittance and at the same time reduces the average transmittance of the channel. In order to clarify the effect of the channel stabilization by the beam expansion on the quantum communication and quantum resource sharing we apply the obtained characteristics of the channel to these applications in the next section. Effect of beam expansion on entangled resource sharing and CV QKD Before we analyze the applicability of channel stabilization by beam expansion for CV QKD, we first study the impact of the method on the entanglement of a typical two-mode Gaussian entangled state, namely two-mode squeezed vacuum [16], shared over a fading channel. We characterize the entanglement of the state using the logarithmic negativity [38], defined as where ν is the smallest symplectic eigenvalue of a covariance matrix of a partially transposed state for a pair of modes (see [16] for review on covariance matrix formalism for Gaussian states). 
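For reference, the logarithmic negativity mentioned above takes, in its standard form for two-mode Gaussian states, the expression below, written in terms of the smallest symplectic eigenvalue ν of the partially transposed covariance matrix; the choice of logarithm base only fixes the units, and the state is entangled whenever ν < 1.

```latex
\mathrm{LN} \;=\; \max\!\left[\,0,\; -\log_{2}\nu \,\right]
```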
We evaluate the logarithmic negativity for a state quadrature variance of V = 7 shot-noise units (SNU, being the variance of the vacuum fluctuations), corresponding to approximately -8 dB of conditionally prepared quadrature squeezing after a homodyne detection on one of the beams, which is feasible with current technology [39] and is close to optimum for the given protocol parameters, in the presence of 1% SNU of excess noise (here and further the fixed channel excess noise is related to the channel input). The results of the calculations are given in Fig. 4 (left) obtained from the experimental data and from the analytical fading distribution. It is clear from the graphs that the channel for the non-expanded beam was more suitable for entanglement distribution and that the beam expansion degraded the entanglement due to increase of the overall loss. The reason for such behavior is that in the considered region of parameters Gaussian entanglement is more sensitive to the channel transmittance than to the small amount of excess noise caused by fading. The transmittance fluctuations in the studied channels were relatively low and did not introduce significant noise, which would reduce the Gaussian entanglement of the states, while decreasing the average transmittance due to beam expansion resulting in entanglement degradation. We also analyze the effect of the beam expansion on the typical CV QKD protocol with coherent states of light and homodyne detection by theoretically estimating the lower bound on the key rate secure against collective attacks [40] in a given channel, which, in the reverse reconciliation scenario (being robust against channel attenuation below -3 dB [5]), is given by where I AB is the classical (Shannon) mutual information between the trusted parties, χ BE is the Holevo bound on an information on the shared key received by the remote party, which is available to an eavesdropper, β ∈ (0, 1) is the post-processing efficiency, which characterizes how close the trusted parties are able to reach the mutual information I AB . In our analysis we follow the purification-based method (see [41] for the details of security analysis) to calculate the Holevo bound [42] and take into account the realistic post-processing efficiency of 97% [43]. The results of the calculations are given in Fig. 4 (right) and clearly show the improvement of the key rate due to the stabilization of the fading channel with a small beam expansion upon fixed modulation, which, however, becomes disadvantageous upon the further increase of the beam spot. We therefore confirm the positive effect of the beam expansion in the fading channel on the CV QKD, which, however, can be optimized in the particular conditions. For Gaussian entanglement distribution or for the coherent-state CV QKD protocol with optimized modulation the method would have been useful for a stronger channel turbulence. The positive role of fading stabilization for the optimized CV QKD upon stronger turbulence is theoretically predicted in Fig. 5, where the lower bound on the key rate is plotted versus the beam expansion settings at different values of beam-spot fluctuations. It is evident from the plot, that the experimentally tested beam expansion settings would have been advantageous for the optimized protocol at σ 2 b = 0.4 (note that in our previous study of the same channel upon stronger turbulence the beam-spot fluctuations variance was estimated as σ 2 b = 0.36 [24]). 
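For completeness, the lower bound on the asymptotic key rate against collective attacks with reverse reconciliation that is evaluated above has the standard Devetak-Winter-type form shown below, with β, I_AB and χ_BE as defined in the text; security requires K > 0.

```latex
K \;\geq\; \beta\, I_{AB} \;-\; \chi_{BE}
```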
Despite evident differences in the effect of beam expansion on the considered quantities (namely logarithmic negativity and key rate) as shown in Fig. 4, we theoretically observe a similar behavior of logarithmic negativity at higher values of a/W ratio (out of the experimentally tested and plotted region). Indeed, the logarithmic negativity also has a local maximum at certain ratio a/W (depending on the variance V), similarly to the key rate, and would decrease for higher values of the ratio. The difference is however that the key rate is more sensitive to channel fluctuations due to beam wandering and we therefore observed an improvement of the channel parameters by means of beam expansion for the application of CV QKD. Moreover, entanglement is not vanishing completely at high a/W for moderate initial entanglement corresponding to a variance of V < 15. To complete our study, we numerically illustrate the behavior of the logarithmic negativity and the lower bound on the secure key rate with respect to ratio a/W in the given channel for different initial resources in Fig. 6. We characterize the initial resource by the state variance V for the key rate plot or, equivalently, by the initial entanglement of the shared state, which reads LN for the logarithmic negativity plot, to verify how much of the initial entanglement survives in a fading channel. It is evident from the plots that beam expansion in the considered channel can have positive effect on the key rate practically for any modulation and on the entanglement once the initial entanglement and beam-to-aperture ratio are large. In our study we considered the most feasible coherent-state Fig. 6. Entanglement and secure key rate versus aperture-to-beam size ratio a/W at different initial resources. (Left): Logarithmic negativity (LN) of an entangled state shared over a fading channel versus aperture-to-beam size ratio and initial logarithmic negativity LN 0 and (Right) Lower bound on the key rate (KR) secure against collective attacks in the fading channel versus aperture-to-beam size ratio and modulated state variance V. Channel excess noise is 1% SNU, post-processing efficiency for the Gaussian CV QKD is 97%. CV QKD protocol. While squeezed-state protocol is known to be typically more robust against channel transmittance fluctuations, its performance is still degraded by fading, related to beam wandering [44], so the beam expansion technique can be useful for the squeezed-state protocols as well and should be optimized in the given conditions. Conclusions We studied the possibility to stabilize a real fading channel by expanding the beam in order to suppress the transmittance fluctuations concerned with the beam wandering in turbulent atmosphere. We experimentally characterized the change of statistics of the channel transmittance fluctuations and showed that they qualitatively correspond to the theoretical predictions given by the Weibull distribution. We proved the positive effect of the channel stabilization by beam expansion on the distribution of a nonclassical resource (entanglement) and on Gaussian continuous-variable quantum key distribution. We have shown that for the channel used for the experimental results presented here beam expansion could become disadvantageous for Gaussian entanglement of the distributed state, described by the logarithmic negativity, due to weak atmospheric turbulence. On the other hand, channel stabilization by beam expansion can improve the secret key rate of the coherent-state protocol. 
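As a minimal illustration of the trade-off discussed above, the following sketch estimates ⟨η⟩, ⟨√η⟩, Var(√η) and the resulting fading excess noise from sampled transmittance values. The sample arrays and the chosen variance V = 7 SNU are illustrative assumptions, not the measured Erlangen data.

```python
import numpy as np

def fading_statistics(eta_samples, V):
    """Channel statistics relevant for CV quantum communication over fading links.

    eta_samples -- measured channel transmittance values in [0, 1]
    V           -- quadrature variance of the input state (shot-noise units)
    Returns (<eta>, <sqrt(eta)>, Var(sqrt(eta)), fading excess noise).
    """
    eta = np.asarray(eta_samples, dtype=float)
    mean_eta = eta.mean()
    mean_sqrt_eta = np.sqrt(eta).mean()
    var_sqrt_eta = mean_eta - mean_sqrt_eta**2      # Var(sqrt(eta)) = <eta> - <sqrt(eta)>^2
    fading_noise = var_sqrt_eta * (V - 1.0)         # f = Var(sqrt(eta)) (V - 1)
    return mean_eta, mean_sqrt_eta, var_sqrt_eta, fading_noise

# Toy comparison: a narrow beam (strong fading) vs. an expanded beam
# (stabilized transmittance, but higher average loss).
rng = np.random.default_rng(seed=1)
narrow = np.clip(rng.normal(0.60, 0.15, 100_000), 0.0, 1.0)
expanded = np.clip(rng.normal(0.45, 0.03, 100_000), 0.0, 1.0)
for label, samples in [("narrow beam", narrow), ("expanded beam", expanded)]:
    print(label, [round(x, 4) for x in fading_statistics(samples, V=7.0)])
```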
The improvement requires an optimization of the beam width setting under given conditions. Importantly, the method does not require any adaptive control of the source and detector based on monitoring of the beam wandering. It can be combined with other known methods for fading channel stabilization such as fast steering [45] or concave mirrors [46], channel diversity [47,48], multiple wavelengths [49], adaptive optics and active tracking systems [50][51][52][53][54]. It should be emphasized that our technique requires a link which allows for a certain margin in the loss tolerance. Especially for satellite links the loss is usually already very high as the aperture-to-beam size is very low, such that our stabilization technique will hardly have any benefit. But our proposed technique can be beneficial in mid-range terrestrial free-space links, what would be the field of application for this stabilization technique. Our result therefore demonstrates a promising and feasible method to stabilize free-space atmospheric channels for the tasks of continuous-variable quantum key distribution and quantum communication, which is best applicable in low or medium loss regime. Future steps will include full implementation of continuous-variable quantum key distribution and entanglement sharing over free-space atmospheric channels aided by channel stabilization methods.
4,818.8
2018-11-12T00:00:00.000
[ "Physics" ]
On tree amplitudes of supersymmetric Einstein-Yang-Mills theory We present a new formula for all single trace tree amplitudes in four dimensional super Yang-Mills coupled to Einstein supergravity. Like the Cachazo-He-Yuan formula, our expression is supported on solutions of the scattering equations, but with momenta written in terms of spinor helicity variables. Supersymmetry and parity are both manifest. In the pure gravity and pure Yang-Mills sectors, it reduces to the known twistor-string formulae. We show that the formula behaves correctly under factorization and sketch how these amplitudes may be obtained from a four-dimensional (ambi)twistor string. Introduction In any space-time dimension, the scattering equations underpin the classical S-matrix of a wide variety of massless field theories by constraining external kinematic data in terms of marked points on an auxiliary Riemann sphere, Σ [1]. Geometrically, these constraints can be compactly summarized as the requirement that a meromorphic quadratic differential P 2 (z) vanishes globally on Σ [2]. Recently, the scattering equations have received considerable attention for their central role in the Cachazo-He-Yuan (CHY) expressions for the tree-level S-matrices of various massless bosonic field theories [3,4], although they were first discovered in the context of high-energy string scattering [5][6][7][8]. Even before the advent of the CHY formulae, the importance of the scattering equations was realized for field theory in four space-time dimensions [9]. In four dimensions, the spinor helicity formalism allows us to solve the on-shell condition for momentum by writing it as the product of two Weyl spinors. In addition, the simplicity of on-shell superspace makes it possible to account for arbitrary helicity external states in super-Yang-Mills theory and supergravity. Given the existence of such expressions for the pure gravity and gauge theory sectors [10,11], it is natural to ask if there is a generalization to the tree-level S-matrix of supergravity coupled to super-Yang-Mills theory (sEYM) in four dimensions (we consider the case where the spin one particles of the gravity multiplet remain ungauged [12]). In this paper, we propose such a formula for the coupling of a single trace of gluons to gravity. It is compact, written in terms of integrals over the moduli space of a punctured Riemann sphere Σ, and supported on solutions to the scattering equations. Furthermore, it easily incorporates supersymmetry thanks to the simplicity of four-dimensional on-shell superspace, is manifestly parity invariant, produces the three-point amplitudes of the EYM action, and factorizes appropriately. The new formula suggests a remarkable structure in four-dimensions that would be completely obscured by a naïve replacement k i · k j → [ij] ij in the Pfaffians of the CHY formula. Remarkably, as in [11] the [ , ] and , factors completely decouple, appearing JHEP12(2015)177 in separate determinants. The previously known expressions for amplitudes in pure gauge theory or supergravity can be derived from the sphere correlation functions of certain worldsheet theories [13][14][15][16] (collectively referred to as 'twistor string theories'), and our formula also appears to have a worldsheet origin. The formula Before describing our formula, we first point out several ab initio constraints which any purported formula for sEYM must obey. 
The first of these is rather obvious: in the pure gauge theory or supergravity sectors, it must reduce to known expressions for these treelevel S-matrices. Such expressions are provided by the Roiban-Spradlin-Volovich-Witten (RSVW) formula for super-Yang-Mills theory [9,10], and the Cachazo-Skinner (CS) formula for supergravity [11]. A second constraint comes by considering the gravitational coupling constant κ ∼ √ G N . A tree-level amplitude in EYM with n external gravitons and τ colour traces must be proportional to κ n+2τ −2 . In particular, in the purely gravitational sector this is the usual κ n−2 factor associated with (the Euler character of) a gravitational tree graph, while if there are no external gravitons and only a single colour trace, then we find κ 0 as expected for the conformally invariant Yang-Mills answer. In a connected tree, different colour traces interact by exchanging gravitons, with a gravitational coupling at each end. In [11] it was explained that, when written in terms of a worldsheet model, these powers of κ must be balanced by the same number of powers of [ , ] or , brackets (interpreted as infinity twistors, representing the breaking of conformal invariance). Parity transformations exchange [ , ] and , , and so we learn that in four-dimensional EYM with n ± external gravitons of helicity ±2 and τ colour traces, we should obtain on the worldsheet. A final constraint is given by a rather curious observation about the tree amplitudes of sEYM theory in four dimensions: every colour trace of external gluons must contain at least one gluon of each helicity. 1 In particular, amplitudes with all negative helicity gluons and arbitrarily many positive helicity gravitons must vanish, despite the fact that this is far from obvious by standard MHV counting. To prove this, first note that each three-point interaction of the sEYM Lagrangian must contain at least one gluon of each helicity, or no gluons at all. This clear for the pure gauge theory interactions, and all three-point interactions with two gluons and one graviton arise from the term in the Einstein-Yang-Mills action. Spinor helicity variables make it easy to see that any three-point function coming from this term with two gluons of the same helicity vanishes. JHEP12(2015)177 Now, consider any tree diagram in sEYM, and select all of the gluons in the diagram belonging to a particular colour trace. This identifies a unique tree sub-graph associated with the chosen trace; we may assume that all its internal edges are gluon propagators, since a graviton propagator would lead to a double trace or an external graviton contribution which can be amputated. The total number of gluons (both internal and external) of each helicity appearing in this sub-graph is equal to the number of internal edges (helicitylabelled gluon propagators) plus the number of external gluons of that helicity. The Euler characteristic of the tree sub-graph, combined with the fact that the number of gluons of each helicity at every vertex is positive, then implies the claim: the number of external gluons of each helicity in the trace must be strictly positive. We now turn to the expression for the tree amplitudes, beginning with the nonsupersymmetric case. In this paper, we will only consider the case with τ ≤ 1. External particles are specified by two spinors, λ i α andλ iα for particle i, as well as a helicity label. 
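As a reminder of the four-dimensional spinor-helicity conventions underlying the helicity arguments above (signs and normalizations vary between references, so the relations below are indicative of the structure only, not of this paper's exact conventions):

```latex
k_{\alpha\dot\alpha} = \lambda_{\alpha}\tilde{\lambda}_{\dot\alpha}, \qquad
k^{2} = \det\!\left(k_{\alpha\dot\alpha}\right) = 0, \qquad
\langle ij\rangle = \epsilon^{\alpha\beta}\lambda_{i\,\alpha}\lambda_{j\,\beta}, \qquad
[ij] = \epsilon^{\dot\alpha\dot\beta}\tilde{\lambda}_{i\,\dot\alpha}\tilde{\lambda}_{j\,\dot\beta}, \qquad
2\,k_{i}\cdot k_{j} = \langle ij\rangle\,[ji]
```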
We divide the external particles of each amplitude into sets of gluons and gravitons of positive or negative helicity; positive (negative) helicity gravitons form the set h (h), and positive (negative) helicity gluons form the set g (g). Let us first present the formula, and then explain its various ingredients. The tree-level amplitude with a single colour trace is given by an integral where h denotes the helicity of the given particle (i.e., h = ±1, 2). The {z i , z k } label marked points on an abstract Riemann sphere Σ ∼ = CP 1 in an inhomogeneous coordinate, z, while the complex scaling parameters {t i ,t k } carry conformal weight, taking values in T 1/2 Σ i,k . The expressions λ(z i ),λ(z k ) appearing in (2.2) are given by is the Szegő kernel at genus zero (free fermion propagator). Hence, the parameters t i ,t k carry opposite weight with respect to little group scalings. The three main ingredients in (2.2) are the insertions det ′ Φ, det ′Φ , and PT. Φ is a (|h| + 1) × (|h| + 1) symmetric matrix with rows and columns corresponding to each positive helicity graviton and colour trace in the amplitude. Its entries are given by: JHEP12(2015)177 where i, j ∈ h, and g labels the row/column for the gluon trace. It is easy to see that Φ has co-rank one, with kernel spanned by the vector (1, 1, . . . , 1), so its determinant vanishes. The operation det ′ corresponds to removing any choice of row and column from a matrix and then taking its determinant. det ′ Φ is easily seen to be independent of the choice of row and column removed, and is a canonically-defined non-vanishing object. Φ is the parity conjugate of Φ. It is a (|h| + 1) × (|h| + 1) matrix, with entries where k, l ∈h. This matrix also has co-rank one with kernel spanned by (1, 1, . . . , 1). Finally, PT denotes a 'generalized' Parke-Taylor factor corresponding to the colour trace. If the gluon trace contains n total gluons (of all helicities), then this Parke-Taylor factor is where T a are the generators of the gauge group. Although M g,g h,h takes the form of an integral expression, the delta functions in (2.2) saturate all of these integrals in addition to providing overall momentum conservation. So in reality, all integrals in (2.2) are performed algebraically against delta functions. These delta functions are a refinement of the usual scattering equations; they imply k i · P (z i ) = 0 for P αα (z) = λ α (z)λα(z) [9]. Note that in the special cases where h =h = ∅ or g =g = ∅ it is equivalent to the RSVW or CS formulae respectively, presented in the guise of [16]. The reduced determinants and definitions (2.5) and (2.6) ensure that M g,g h,h is consistent with the counting of (2.1) and that the colour trace contains at least one gluon of each helicity. Supersymmetry can be incorporated straightforwardly, unlike in the CHY formulae, due to the simplicity of on-shell superspace in four dimensions. Extending EYM to N ≤ 3 supersymmetry, 2 on-shell scattering states are specified by the usual two Weyl spinors λ α ,λα as well as Grassmann parameters for the supermomentum, η A orη A , for A = 1, . . . , N . Remarkably, our formula accommodates this supersymmetry with the inclusion of a single exponential factor: We believe that the formula is also correct for N = 4, provided one chooses an appropriate representation for the external gluons. 
JHEP12(2015)177 As usual, amplitudes for individual helicity components of the supermultiplets are read off by expanding the exponential and extracting those terms of appropriate degree in the Grassmann variables. Note that both (2.2) and (2.9) are manifestly parity symmetric. Justification In this section, we show that (2.9) factorizes appropriately and produces the correct threepoint amplitudes. Three-point amplitudes In the helicity-based framework, all three-point amplitudes of the EYM Lagrangian are classified as MHV or MHV, depending on whether they have one or two positive helicity legs, respectively. Since the formula (2.2) is parity-symmetric, we only check the MHV three-point amplitudes explicitly. For Yang-Mills gauge group SU(N ) there are only three such potential amplitudes: with overall momentum conservation and any colour trace stripped off. Note that one can also write down an expression with the homogeneity of ag 1g2 h 3 amplitude: but this does not occur in EYM (as follows from dimensional analysis). Since (2.2) reduces to known expressions in the all-gluon or all-graviton sectors, it is obvious that it reproduces the first two amplitudes in (3.1). So the only non-trivial calculation is to make sure that (2.2) produces the third amplitude in (3.1). In this helicity configuration, our formula becomes where the positions of z 1 , z 2 , z 3 have been fixed with the SL(2, C) freedom, the C * freedom has been used to set t 1 = 1, andt 2,3 have been rescaled. The final two integrations can be done against the delta functions in a straightforward manner, leaving as required. Factorization For the (non-supersymmetric) expression (2.2) to factorize correctly, we must show that in the limit where a subset L of the external momenta go on-shell where the sum is over possible helicities flowing through the factorization channel, the integral is over the on-shell phase space of the intermediate state, and R is the compliment of L. In many respects, this calculation follows similar lines to those for the RSVW and CS formulae [18,19]. In terms of the Riemann surface Σ underlying (2.2), the factorization limit should correspond to a degeneration of Σ into two Riemann spheres Σ L and Σ R joined at a node. Locally, we can model this degeneration in terms of inhomogeneous coordinates by where in the q → 0 limit the node is located at z a ∈ Σ L and z b ∈ Σ R . 3 One advantage of this local model is that the behaviour of the propagator (2.4) is particularly simple near the degenerate limit: This follows from the universal behaviour of Szegő kernels on degenerate Riemann surfaces [20]. Since the scaling parameters t i ,t k carry conformal weight, their behaviour in the degeneration limit is non-trivial. The local model (3.3) dictates that a section of T 1/2 Σ will scale as q ±1/4 in the q → 0 limit, depending on which side of the degeneration the section is located. 4 The choice of which of Σ L or Σ R is associated with the '+' scaling must be summed over; below we see that this choice is associated with the helicity (positive or negative) of an intermediate particle flowing through the factorization channel. Homogeneity of the measure on the moduli space combined with little group scaling requires that the t i andt k parameters scale oppositely in the degeneration limit. The behaviour of the scaling parameters is thus given by: 5) 3 To be precise, the local model should read (zL − za)(zR − z b ) = q, where zL, zR are appropriately chosen coordinates on ΣL, ΣR. 
We keep this choice of local coordinates implicit in what follows to streamline notation. 4 In actuality, (3.3) only restricts a section of T 1/2 Σ to scale as q ±α/4 on ΣL and q ∓(2−α)/4 on ΣR. We consider the symmetric case α = 1 for simplicity only. with the choice of upper or lower sign to be summed over. It is natural to work in a formalism where the only objects carrying conformal weight on Σ are these parameters and the Szegő kernels. This is accomplished by defining parameters t * ,t * (valued in T in the amplitude (2.2). This allows us to re-write the behaviour of the propagator (2.4) in the attractive form: A priori, the t * ,t * are just convenient dummy variables, but they will eventually become associated with an intermediate on-shell particle flowing through the degeneration. It is also convenient to take a factor of t −2 i ort −2 k from each graviton wave function and incorporate it into det ′ Φ or det ′Φ , respectively. Additionally, we can choose a single gluon of each helicity, say r ∈ g, s ∈g, and divide the trace row and column in each of Φ, Φ by the associated scaling parameter, t r ,t s . The result of these (trivial) manipulations is a transformation inside (2.2). Here, we abuse notation by writing Φ andΦ for the rescaled matrices with entries and likewise forΦ. These rescaled matrices have the (important) virtue of behaving nicely in the factorization limit. The definition of the reduced determinants is adapted appropriately for this rescaling: where |Φ a b | denotes the determinant of Φ with row a and column b removed, etc. Note that the formula does not depend on which representative gluons we choose for rescaling the matrices. We always have at least one gluon of each helicity and the JHEP12(2015)177 fundamental properties of the reduced determinant make it obvious that the final answer does not depend on the choice we make. Additionally, the rescaling of the matrix entries is reflected in a rescaling of the null vector; for instance, the kernel of Φ is now spanned by (t 1 , . . . , t |h| , t r ), which scales according to (3.5) in the degeneration limit. 5 Clearly, there are two different ways in which the formula (2.2) can factorize as q → 0: the degeneration may or may not split the colour trace. We will show that in the former case the intermediate state corresponds to a gluon, while in the latter it will be a graviton. Let us begin by considering the case which does not disturb the colour structure. The behaviour of (2.2) in the q → 0 limit can be easily expressed by arranging the matrices Φ andΦ in a judicious manner. Arrange the entries of both matrices into blocks, so that the upper-left block corresponds to entries with both indices on Σ L and the bottom-right block corresponds to entries with both indices on Σ R : Without loss of generality, we assume all gluons to remain on Σ L . In the q → 0 limit, (3.7) ensures that the off-diagonal blocks of the rescaled matrices vanish at O( √ q), so we must focus on what is happening in the diagonal blocks. Consider Φ; clearly the entries (Φ LL ) ij and (Φ RR ) ij are unchanged on Σ L and Σ R , respectively, as q → 0. However, the diagonal entries behave as where we choose the upper sign in (3.5) for concreteness. As the worldsheet factorizes, the particles h R on Σ R form an effective particle insertion on Σ L which appears on the diagonal entries of Φ LL , while all entries of Φ RR only encode the particles of h R . 
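The block structure invoked at this step can be written out schematically; again this is a transcription of the prose (the corresponding display is missing), not the paper's own equation:

\Phi \;=\; \begin{pmatrix} \Phi_{LL} & \Phi_{LR} \\ \Phi_{RL} & \Phi_{RR} \end{pmatrix}, \qquad \Phi_{LR},\ \Phi_{RL} = O(\sqrt{q}) \ \ \text{as } q \to 0,

with the off-diagonal blocks suppressed by the degenerating Szegő kernel between points on opposite sides of the node, and the diagonal entries of \Phi_{LL} acquiring an effective insertion at z_a from the particles of h_R, exactly as described above.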
The interpretation is clearly that the node becomes the insertion point of a new particle with positive/negative helicity on Σ L/R . (The opposite configuration follows by taking the lower sign in (3.5), and the final result contains the sum over both choices.) To make this manifest, introduce a new spinor-helicity variableλ * by inserting into the amplitude (2.2) close to the degeneration, with JHEP12(2015)177 On the support of this delta function, the diagonal entries of Φ can be rewritten as where h * L = h L ∪ { * }. This is precisely the form required for the expected channel. With our choices corresponding to a positive helicity intermediate state on Σ L , it is natural to remove a row and column corresponding to a graviton on Σ R when computing the original det ′ Φ. As q → 0, one finds where Φ L is the (|h * L | + 1) × (|h * L | + 1) matrix appropriate for Σ L with the positive helicity intermediate state * at z a ∈ Σ L , and Φ R is the |h R | × |h R | matrix appropriate for Σ R . The story forΦ proceeds in a similar fashion. By introducing the diagonal entries ofΦ LL andΦ RR become With this corresponding to a negative helicity intermediate state on Σ R , we take det ′Φ by eliminating a row and column for a graviton on Σ L , to obtain the desired factorization: This shows that the reduced determinants factorize correctly and yield a factor of q t 2 * t 2 * , while also introducing the wave functions for an intermediate on-shell state. Next we examine the behaviour of the external wave functions. On the support of (3.10), (3.13), the arguments of the delta functions for the external states involve JHEP12(2015)177 in the degenerate limit q → 0. As q → 0 a scaling argument based on the local model (3.3) dictates how the measure on the moduli space of Σ factorizes into measures on the moduli spaces of Σ L and Σ R (cf. [21,22]). Combining all of our observations up to this point, the formula (2.2) looks like The delta functions for λ * andλ * are naturally incorporated into products over i∈h * L ∪g and l∈h * R as gravitons. Finally, trading the delta function of (3.6) for an additional (vol C * ) −1 , we are left with where and similarly for M − R . All that remains to show is that extracting the residue at q = 0 corresponds to setting the momentum flowing from Σ L to Σ R on-shell. To this end, notice that the various delta functions in (3.19) imply that In addition to enforcing momentum conservation on each side of the cut, this reveals the intermediate particle as manifestly on-shell. Thus, taking the residue at q = 0 results in the h = + graviton exchange term in the factorization expression (3.2). The h = − term is obtained in a similar fashion, by choosing the other sign in (3.5). The second possible degeneration, which disturbs the colour structure of the amplitude via a gluon exchange, follows in much the same way, so we will be more brief in its description. In this case the external gluons are split between Σ L and Σ R in the q → 0 limit. The behaviour of the matrices Φ andΦ under the degeneration is the same as before; it is now convenient to eliminate the row and column corresponding to the trace in both det ′ Φ and det ′Φ . JHEP12(2015)177 Once again, after inserting the delta functions (3.10), (3.13), the reduced determinants factorize as (3.12), (3.15) with the intermediate particle being incorporated into g * L = g L ∪ { * } on Σ L andg * R =g R ∪ { * } on Σ R . 
However, the factor of q t 2 * t 2 * is entirely absorbed by the scaling of the gluon representatives t 2 r ,t 2 s appearing in (3.8). From (3.7) it is easy to see that the Parke-Taylor factor behaves as where PT γ a L denotes the Parke-Taylor factor for those gluons located on Σ L with the point z a inserted in the cyclic ordering precisely where the original colour trace is broken by the degeneration. The measures for t * ,t * appearing in (3.6) are automatically appropriate for gluons, so we obtain the correct products over delta functions for factorization. Taking the residue of the resulting dq/q measure corresponds to the h = + gluon exchange term in the factorization expression (3.2). Factorization for the supersymmetric amplitude (2.9) is established by looking at how the exponential encoding supersymmetry behaves in the q → 0 limit. For concreteness we pick the case of (3.5) where t i ∼ q 1/4 on Σ L and the other case follows analogously, as usual. To begin notice that the exponent splits as (3.21) The first two terms are clearly appropriate for Σ L and Σ R , respectively, while the third term accounts for the new intermediate particle. In fact, a simple calculation reveals that the exponential of this third term can be written exp     i∈h R ∪g R k∈h L ∪g L t itk t * t * ηi ·η k S(k, a)S(b, i)     = d N η * d Nη * e η * ·η * exp   k∈h L ∪g L t * tkη * ·η k S(k, a) + i∈h R ∪g R t it * ηi ·η * S(b, i)   , on the support of the same delta functions used in the purely bosonic calculation. In summary, the exponential encoding supersymmetry factorizes, with the arguments on Σ L , Σ R becoming i∈h * L ∪g L k∈h L ∪g L t itkηi · η k S(k, i) and respectively (without altering the simple pole in q). Combined with our previous arguments, this leads to the appropriate on-shell superspace measure dq q d 2|N λ * d 2|Nλ * vol C * e η * ·η * M + L M − R + O(q) . JHEP12(2015)177 4 Conclusions We have presented a new formula for all single trace tree amplitudes in supersymmetric Einstein Yang-Mills theory in four dimensions, written in terms of on-shell superspace. The formula was shown to reproduce the correct three-point amplitudes and to factorize appropriately in both gravitational and coloured channels. The considerations at the beginning of section 2 also place strong constraints on the form of general multitrace EYM amplitudes. It would be interesting to investigate these further. In the purely gravitational sector, the formula reduces to the representation of amplitudes given in [11,16]. These formulae are known to be the output of a twistor-string theory for maximal supergravity in d = 4 [15] and it is natural to wonder whether there is a modification of this theory that describes sEYM. In this regard, we note that the coupling of the gluons to the gravitons in Φ,Φ, together with the Parke-Taylor factors, may be generated by inserting operators tr D −1 δ 2 (γ) AD −1 δ 2 (γ)à , and tr D −1 O AD −1Õà . HereD =∂ + A(Z) +Ã(W ) and where A andà are gluon wavefunctions. It seems likely that the RSVW formula for tree amplitudes in sYM are best interpreted as the single trace sector of a twistor string for sEYM. Such a theory would also enable the use of worldsheet factorization arguments to streamline the calculations above [23].
5,912.2
2015-12-01T00:00:00.000
[ "Physics" ]
Highly Sensitive Plasmonic Structures Utilizing a Silicon Dioxide Overlayer In this paper, simple and highly sensitive plasmonic structures are analyzed theoretically and experimentally. A structure comprising a glass substrate with a gold layer, two adhesion layers of chromium, and a silicon dioxide overlayer is employed in liquid analyte sensing. The sensing properties of two structures with distinct protective layer thicknesses are derived based on a wavelength interrogation method. Spectral reflectance responses in the Kretschmann configuration with a coupling BK7 prism are presented, using the thicknesses of individual layers obtained by a method of spectral ellipsometry. In the measured spectral reflectance, a pronounced dip is resolved, which is strongly red-shifted as the refractive index (RI) of the analyte increases. Consequently, a sensitivity of 15,785 nm per RI unit (RIU) and a figure of merit (FOM) of 37.9 RIU−1 are reached for the silicon dioxide overlayer thickness of 147.5 nm. These results are in agreement with the theoretical ones, confirming that both the sensitivity and FOM can be enhanced using a thicker silicon dioxide overlayer. The designed structures prove to be advantageous as their durable design ensures the repeatability of measurement and extends their employment compared to regularly used structures for aqueous analyte sensing. Introduction The surface plasmon resonance (SPR) phenomenon has a wide range of applications in many branches of natural sciences for the purpose of sensing [1][2][3][4]. The SPR sensors utilize various techniques, such as intensity [5] or phase [6] detection based on either wavelength [7][8][9][10] or angular [8,11] interrogation methods. Amongst the most commonly used prism-based SPR excitation techniques is utilizing the attenuated total reflection (ATR) method in the Kretschmann configuration [1,[6][7][8][9]12]. Due to high sensitivity to the refractive index (RI) of the surrounding environment, SPR is mainly exploited for RI sensing; however, the pivotal factors that determine the performance of the sensor are the structural and optical parameters of the employed SPR structure. Various structures that utilize a dielectric overlayer, such as a thin silicon dioxide film [13] or a tungsten disulfide nanosheet overlayer [14], have been designed to enhance the sensitivity or to increase the detection accuracy using a graphene-MoS 2 hybrid structure [15]. The broad effect of coating the plasmonic material with a dielectric layer on sensing properties has been thoroughly studied for distinct materials and thicknesses, including a thin dielectric layer [16], a thin layer of silicon [17] or a semiconductor-metal-dielectric heterojunction system [18]. Moreover, the addition of a dielectric overlayer, such as silicon dioxide, in a simple plasmonic structure [13,19] or in complex structures [20,21], can lead to the excitation of guided modes that can greatly improve the performance of the sensor. In some instances, extremely sensitive plasmonic platforms based on metamaterials have been proposed and realized [22][23][24][25][26]. A porous nanorod layer has been utilized, where the achieved RI sensitivity attained a value of more than 30,000 nm/RIU [22]. Amongst other top performers, a miniaturized plasmonic biosensing platform based on a hyperbolic metamaterial using a grating-coupling technique has been reported [24], achieving RI sensitivity up to 30,000 nm/RIU and a figure of merit (FOM) of 590 RIU −1 . 
Furthermore, a gold nanorod hyperbolic metamaterial-based sensor utilizing a prism coupling technique that could reach a sensitivity of 41,600 nm/RIU and FOM of 416 RIU −1 was reported in [26]. Moreover, using nanoporous gold films, RI sensitivity over 15,000 in the near-infrared range (NIR) has been reported in [25]. Generally, the design of the proposed structure proves to be advantageous as it is very easy to change the analyte. The protective layer ensures the repeatability of measurement and the durability of the structure. Since gold is one of the most commonly used materials to form the plasmonic layer in the SPR structures, it is inevitable for its surface to become quickly damaged when it is in direct contact with an aqueous analyte. It is more fragile to clean, and the optical response changes after a few measurements when aqueous analytes are frequently studied. In this paper, we present highly sensitive plasmonic structures consisting of a BK7 glass substrate, a gold layer, two adhesion layers of chromium, and a protective layer of silicon dioxide. This is an improved structure in which a polymer layer [10] is substituted by a protective dielectric layer. Proposed structures are employed in the Kretschmann configuration with a coupling BK7 prism for aqueous analyte sensing. A method of spectral ellipsometry is utilized to determine the thicknesses of individual layers in the structure, including the silicon dioxide layer thicknesses of 147.5 nm and 270.9 nm. Using the obtained thickness parameters, the reflectance spectra for aqueous solutions of NaCl with distinct concentrations are calculated, and the sensing properties of the structures are derived. The first structure, with a thinner SiO 2 protective layer, showed sensitivity to RI of the analyte up to 7130 nm/RIU, with FOM attaining a value of 25.7 RIU −1 , and the second structure, with a thicker SiO 2 protective layer, showed sensitivity to RI up to 7660 nm/RIU, with FOM attaining a value of 38.0 RIU −1 . To confirm the theoretically derived values, measurements of the spectral reflectance ratio were performed for both structures. The following sensitivities to RI were achieved: for the first structure, up to 8140 nm/RIU, and for the second structure, up to 8690 nm/RIU. Furthermore, another measurement was performed for a smaller angle of incidence for the first structure with the silicon dioxide layer thickness of 147.5 nm, and RI sensitivity up to 15,785 nm/RIU was achieved, with FOM attaining a value of 37.9 RIU −1 . Structure Design The structure under study consists of a BK7 glass substrate, adhesion layers of chromium, a gold layer, and a protective layer of silicon dioxide. A schematic drawing of the structure is shown in Figure 1. Figure 1. Schematic drawing of a multilayer structure. Transfer Matrix Method To express the optical response of the multilayer structures, the transfer matrix method (TMM) was utilized. 
We can describe the propagation of electromagnetic waves through a system of thin layers with the use of transmission and propagation matrices for each layer [27]: where D ij is a transmission matrix at the ij-th interface (for i = j − 1) and P j is a propagation matrix through the j-th layer: where t j is a layer thickness and k j is a wave-vector component perpendicular to the ij-th interface, and, for a prism of refractive index n p (λ) and an angle of incidence θ of the incident light, it can be expressed as: The total transfer matrix M is then obtained by linking together the transmission and propagation matrices across the entire structure: where the reflection coefficient for pand s-polarized light can be expressed as r s,p = M 21 M 11 . Material Parameters In this section, dispersion formulas used to model the optical response of the multilayer structures are listed with their corresponding parameters. Gold Layer The dispersion of the gold layer can be described by the complex dielectric function given by the Drude-Lorentz model with the parameters listed in Table 1 [30]: Adhesion Chromium Layers The dispersion of the adhesion chromium layers can be described by the complex dielectric function given by the Drude Critical Point (CP) model with the parameters listed in Table 2 [31,32]: Theoretical Analysis The optical properties and structural parameters of the proposed SPR structure are pivotal factors that determine the sensing properties of the structure. It is crucial to consider the thicknesses of individual layers and their effects on the properties of a dip in the reflectance spectrum that can be resolved by a spectrometer operating in a NIR spectral range of 1000-1900 nm. Firstly, the effect of the thickness of the gold layer t Au is considered. The spectral reflectance of p-polarized light R p (λ) for distinct thicknesses of the gold layer is shown in Figure 2a,b for the structure with silicon dioxide thicknesses t SiO 2 = 150 nm and t SiO 2 = 300 nm, respectively, when the analyte is water. It can be observed that the resonance dip shifts towards longer wavelengths as the gold layer thickness decreases, and it is narrower for the silicon dioxide thickness of 300 nm. Moreover, the dip is too wide for the gold layer thickness in the range of 20-30 nm. By increasing the gold layer thickness, the dip becomes more narrow and more shallow. Therefore, the optimal gold layer thickness that serves as a middle ground between the depth and width of the dip was resolved as t Au = 38 nm. To investigate the influence of the silicon dioxide thickness t SiO 2 on the properties of the resolved dip, the reflectance spectra R p (λ) were calculated for the silicon dioxide thickness range of 100-300 nm. The calculated spectra are shown in Figure 3a, and it is apparent that the dip width decreases and the position of the dip shifts towards longer wavelengths with increasing thickness of the silicon dioxide. Furthermore, the thickness of the adhesion layer of chromium in the contact with the silicon dioxide also quite considerably affects the width and depth of the resolved dip. The spectral reflectances R p (λ) for distinct thicknesses of the chromium adhesion layer are shown in Figure 3b for the silicon dioxide thickness t SiO 2 = 150 nm. On increasing the thickness t Cr , surface plasmons become more damped, and the dip shows decreasing depth and increasing width. The optimal thickness of the chromium adhesion layer is as small as possible. 
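Before discussing the adhesion layer further, the transfer-matrix calculation described at the start of this section can be made concrete with a short Python sketch that evaluates R_p = |M_21/M_11|^2 for a prism/Cr/Au/Cr/SiO2/analyte stack. It is a minimal sketch under stated assumptions: the constant refractive indices and the thicknesses below are placeholders for illustration only, whereas the actual calculations in the paper use the dispersive Drude-Lorentz and Drude Critical Point models together with the ellipsometrically determined thicknesses.

import numpy as np

def reflectance_p(n_list, d_list, wavelength, theta_inc):
    # Reflectance |r_p|^2 of a planar stack for p-polarized light.
    # n_list: complex indices [prism, layer_1, ..., analyte]
    # d_list: thicknesses (m) of the inner layers, len(n_list) - 2 entries
    # wavelength: vacuum wavelength (m); theta_inc: angle in the prism (rad)
    n = np.asarray(n_list, dtype=complex)
    k0 = 2 * np.pi / wavelength
    kx = k0 * n[0].real * np.sin(theta_inc)          # conserved transverse wavevector
    kz = np.sqrt((k0 * n) ** 2 - kx ** 2 + 0j)       # longitudinal component in each medium

    def D(i, j):                                     # transmission (interface) matrix D_ij
        r = (n[j] ** 2 * kz[i] - n[i] ** 2 * kz[j]) / (n[j] ** 2 * kz[i] + n[i] ** 2 * kz[j])
        t = 2 * n[i] * n[j] * kz[i] / (n[j] ** 2 * kz[i] + n[i] ** 2 * kz[j])
        return np.array([[1, r], [r, 1]], dtype=complex) / t

    def P(j, d):                                     # propagation matrix through layer j
        return np.array([[np.exp(-1j * kz[j] * d), 0],
                         [0, np.exp(1j * kz[j] * d)]], dtype=complex)

    M = D(0, 1)
    for j, d in enumerate(d_list, start=1):          # M = D_01 P_1 D_12 P_2 ... D_(N-1)N
        M = M @ P(j, d) @ D(j, j + 1)
    return abs(M[1, 0] / M[0, 0]) ** 2

# Placeholder stack: BK7 prism / 2 nm Cr / 38 nm Au / 2 nm Cr / 147.5 nm SiO2 / water.
n_stack = [1.50, 3.1 + 3.4j, 0.4 + 8.5j, 3.1 + 3.4j, 1.45, 1.32]
d_stack = [2e-9, 38e-9, 2e-9, 147.5e-9]
print(reflectance_p(n_stack, d_stack, 1.3e-6, np.radians(68)))

With dispersive indices in place, sweeping the wavelength in this routine reproduces the R_p(λ) dips whose depth, width, and position are analyzed above and below.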
However, the absence of the adhesion layer, as we confirmed experimentally, significantly impacts the longevity of the structure. Thus, we have chosen t Cr = 2 nm. To investigate the sensing properties of the structures, it is important to determine the RI sensitivity, which is defined as follows: where δλ r is a change in the resonant wavelength, which corresponds to the change in the position of the dip related to a change in the refractive index of the analyte δn. Furthermore, the performance of the SPR structures can be also evaluated in terms of the figure of merit, which is defined as the sensitivity S n divided by the full-width half-maximum (FWHM) of the dip. Taking into account the depth D of the dip, the definition can be further expanded as [33]: The theoretical RI sensitivity and FOM as a function of silicon dioxide layer thickness were calculated in order to estimate the performance of the structure. The calculated RI sensitivity is shown in Figure 4a and FOM in Figure 4b for the realistic thicknesses of the chromium adhesion layer. Silicon dioxide thickness (nm) Based on the theoretical results, it is evident that increasing the silicon dioxide overlayer thickness leads to increased RI sensitivity and FOM and that the increase in the chromium adhesion layer thickness leads to a minor improvement in RI sensitivity. However, it is accompanied by a decrease in FOM, which is more pronounced for a thicker silicon dioxide overlayer. Furthermore, RI sensitivity and FOM were calculated as a function of the gold layer thickness. The dependences are shown in Figure 5a,b. It is evident that the considered angle of incidence greatly impacts both RI sensitivity and FOM. Moreover, it can be observed that RI sensitivity decreases with the increasing thickness of the gold layer, and that the FOM takes a maximum value, similarly as in [34], near the gold layer thickness of 39 nm. Fabrication of Structures Two variations of the structure with different thicknesses of the protective layer were manufactured. In the process, the substrates were coated homogeneously by a deposition technique based on radio frequency (RF) magnetron sputtering [35] with chromium, gold, and silicon dioxide. We have used a Cr target with 99.95% purity, and Au and SiO 2 targets with 99.99% purity. All targets were 152 mm in diameter. To deposit the Cr and Au layers, the working gas of argon was used under the deposition pressure of 0.150 Pa with forward RF power of 150 W and 100 W, respectively, at 13.56 MHz. SiO 2 coating was deposited in the mixture of Ar and O 2 under the total deposition pressure of 0.250 Pa and partial oxygen pressure of 0.022 Pa with forward RF power of 720 W at 13.56 MHz. To confirm the theoretical results, the method of spectral ellipsometry was utilized to determine the thicknesses of individual layers in the manufactured structures. Ellipsometry measurements were performed for the angles of incidence in a range of 40-70 • for both structures. The individual layer thicknesses obtained by the evaluation of the ellipsometric measurements are listed in Table 3. Responses of Real Structures Theoretical spectral reflectance responses in the Kretschmann configuration with a coupling BK7 prism were calculated for the angle of incidence θ = 68 • , when aqueous solutions of NaCl with concentrations in a range of 0-10 wt% with a step of 2 wt% were considered. The results are presented for both structures in Figure 6. 
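The sensitivities and figures of merit quoted in the following paragraphs are obtained from the dip positions via the definitions given earlier in this section. The helper below is a minimal numerical sketch of that evaluation, with deliberately naive dip-finding; the synthetic Gaussian dips in the usage example are placeholders, not the paper's data, and the depth-weighted FOM variant of [33] is not reproduced here.

import numpy as np

def dip_metrics(wl, R):
    # Resonance wavelength, FWHM and depth of a single reflectance dip (naive search).
    i_min = int(np.argmin(R))
    baseline = float(np.max(R))
    depth = baseline - R[i_min]
    below = np.where(R <= baseline - depth / 2.0)[0]   # assumes exactly one dip
    return wl[i_min], wl[below[-1]] - wl[below[0]], depth

def sensitivity_and_fom(wl, R_a, R_b, n_a, n_b):
    # S_n = d(lambda_r)/dn from two analyte refractive indices; FOM = S_n / FWHM.
    lam_a, fwhm_a, _ = dip_metrics(wl, R_a)
    lam_b, _, _ = dip_metrics(wl, R_b)
    S_n = (lam_b - lam_a) / (n_b - n_a)
    return S_n, S_n / fwhm_a

wl = np.linspace(1100.0, 1500.0, 2001)                  # nm
R_a = 1 - 0.8 * np.exp(-((wl - 1220.5) / 40) ** 2)      # synthetic dips, illustration only
R_b = 1 - 0.8 * np.exp(-((wl - 1314.0) / 40) ** 2)
print(sensitivity_and_fom(wl, R_a, R_b, 1.3330, 1.3482))  # ~6150 nm/RIU for this toy case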
Theoretical spectral reflectance ratios R p (λ)/R s (λ) were calculated using the thicknesses of individual layers obtained by the method of spectral ellipsometry. It can be observed that the calculated reflectance spectra show well-pronounced dips, whose width is nearly constant and which exhibit a shift towards longer wavelengths with the increasing RI of the analyte. The resonance wavelength as a function of the refractive index of the analyte and corresponding second-order polynomial fit for both structures is shown in Figure 7a. The shift in the resonant wavelength is approaching a value of 100 nm for the different refractive indices of the aqueous solutions of NaCl. To be more specific, for the refractive index values of the analyte in the range of 1.3330-1.3482, the resonant wavelengths in the range of 1220.5-1314.0 nm were calculated for the structure with the silicon dioxide layer thickness t SiO 2 = 147.5 nm (the first thickness), and the resonant wavelengths in the range of 1531.0-1632.8 nm were calculated for the structure with the silicon dioxide layer thickness t SiO 2 = 270.9 nm (the second thickness). The determined RI sensitivity at the angle of incidence θ = 68 • for both structures is shown in Figure 7b. For the first silicon dioxide overlayer thickness, the derived RI sensitivity varies in the range of 5150-7130 nm/RIU, and for the second one in the range of 5720-7660 nm/RIU. The sensitivity can be also expressed in terms of the mass fraction of the solute, which, for the first structure, varies in the range of 7.9 to 10.8 nm/wt%, and for the second structure in the range of 8.7 to 11.6 nm/wt%. The structure with the thicker layer of silicon dioxide is more sensitive to the RI of the analyte. The highest FOM calculated for the angle of incidence θ = 68 • and the first silicon dioxide layer thickness attains a value of 25.7 RIU −1 , and for the second silicon dioxide layer thickness attains a value of 38.0 RIU −1 . Experimental Analysis The experimental setup used to measure the reflectance response of the structure and the refractive index (RI) sensing ability in the NIR spectral range is shown in Figure 8. The setup consists of a polychromatic light source (HL-2000, Ocean Optics), a collimating lens, a polarizer (LP-VIS050, Thorlabs), an analyzer (LPVIS050, Thorlabs), a spectrometer (FT-NIR ARCoptix), and a computer. An angular rotation desk with a goniometer [36], which is used to adjust the angle of incidence, is not shown in Figure 8. It can be observed that a white light source (WLS)-a halogen lamp-is used with a polarizer oriented 45 • with respect to the plane of incidence, to generate the waves for p and s polarization, reaching the air/prism interface at external angle of incidence α. The light beam is then coupled into the SPR structure using the equilateral prism with the angle of incidence θ given by relation θ(λ) = 60 • − sin −1 [sin α/n BK7 (λ)], where n BK7 (λ) is the wavelength-dependent RI of the prism. The reflected light then goes through an analyzer oriented 0 • or 90 • with respect to the plane of incidence to generate the reflectances for either p or s polarization. The process of measurement consists of multiple steps. Firstly, using the optical fiber, the light is guided through the collimating lens so that the collimated light beam goes through the polarizer oriented 45 • with respect to the plane of incidence. 
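As an aside before continuing with the beam path: the internal angles of incidence quoted for the measurements follow from the external angle α on the rotation stage through the prism relation θ(λ) = 60° − sin⁻¹[sin α/n_BK7(λ)] given above. The short check below uses the standard catalogue Sellmeier coefficients for BK7, which are an assumption on our part since the paper does not list them.

import numpy as np

def n_bk7(wavelength_um):
    # BK7 index from the standard Sellmeier equation (catalogue coefficients, assumed).
    L = wavelength_um ** 2
    n2 = (1 + 1.03961212 * L / (L - 0.00600069867)
            + 0.231792344 * L / (L - 0.0200179144)
            + 1.01046945 * L / (L - 103.560653))
    return np.sqrt(n2)

def internal_angle(alpha_deg, wavelength_um):
    # theta(lambda) = 60 deg - arcsin(sin(alpha)/n_BK7(lambda)) for the equilateral prism.
    return 60.0 - np.degrees(np.arcsin(np.sin(np.radians(alpha_deg)) / n_bk7(wavelength_um)))

print(internal_angle(-15.4, 1.3))   # ~70.2 deg, the internal angle quoted for the experiment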
Generated p and s components are then coupled by the equilateral prism into the SPR structure, which is attached to the rotary desk, allowing us to adjust different angles of incidence. For the p-polarized component, the ATR takes place at the prism/structure interface and the reflected light beam then goes through the analyzer. In the first part of the measurement, the analyzer is set to be 90 • with respect to the plane of incidence to generate the spectrum I re f s (λ). Initially, the spectrum is captured for air when no analyte is present. In the second part of the measurement, the analyte is applied, and the spectrum I p (λ) is captured for the analyzer oriented 0 • with respect to the plane of incidence. The corresponding obtained reflectance ratio is given as R p (λ)/R re f s (λ). The measurements of spectral reflectance ratio R p (λ)/R re f s (λ) for aqueous solutions of NaCl with concentrations in a range of 0-10 wt% with a step of 2 wt% were performed at a temperature of 22.8 • C, which was kept constant during the experiment. The analyte RIs n D at a wavelength of 589 nm (sodium D line) were measured by a digital refractometer (AR200, Reichert). The RIs were 1.3331, 1.3358, 1.3427, 1.3492, 1.3587, and 1.3599. Experimental Results and Discussion Measured spectral reflectance ratios R p (λ)/R re f s (λ) for the external angle of incidence α = −15.4 • (θ ≈ 70.2 • ) for both structures are shown in Figure 9. It can be observed that with the increasing thickness of silicon dioxide, the resonance shifts towards longer wavelengths. The resonance wavelength as a function of the refractive index of the analyte and the corresponding second-order polynomial fit for both structures is shown in Figure 10a. It is evident that the shift in the resonant wavelength is approaching a value of 200 nm for the different refractive indices of the aqueous solutions of NaCl. Specifically, for the refractive index values of the analyte ranging from 1.3331 to 1.3599, the resonant wavelengths in the range of 1226.7-1406.1 nm were obtained for the structure with the silicon dioxide layer thickness t SiO 2 = 147.5 nm, and the resonant wavelengths in the range of 1539.4-1731.3 nm were obtained for the structure with the silicon dioxide layer thickness t SiO 2 = 270.9 nm. The achieved RI sensitivity at the angle of incidence θ ≈ 70.2 • for both structures is shown in Figure 10b. For the first silicon dioxide overlayer thickness, the achieved RI sensitivity varies in the range of 4505-8140 nm/RIU, and for the second silicon dioxide overlayer thickness in the range of 5305-8690 nm/RIU. Measured spectral reflectance ratio R p (λ)/R re f s (λ) for a smaller angle of incidence was extended for the silicon dioxide layer thickness t SiO 2 = 147.5 nm only, because, for the silicon dioxide layer thickness t SiO 2 = 270.9 nm, some of the dips were outside of the measurement range of the spectrometer, and the results are shown in Figure 11a Figure 11. Measured spectral reflectance ratios R p (λ)/R re f s (λ) (a). RI sensitivity measured for a smaller angle of incidence (blue) and theoretical RI sensitivity for angle of incidence θ = 65.6 • (red) (b). The silicon dioxide layer thickness t SiO 2 = 147.5 nm. The normalized optical field intensity distribution in the analyte (water) at resonance wavelengths for different silicon dioxide layer thicknesses t SiO 2 and angles of incidence θ is shown in Figure 12b. 
It is evident that the penetration depth [37] of the optical field in the analyte (evanescent tail) is larger for the thicker silicon dioxide overlayer, thus justifying the higher sensitivity. Moreover, an increase in both the penetration depth in the analyte and sensitivity can be achieved by adjusting the angle of incidence, as demonstrated in Figure 12a,b for the silicon dioxide layer thickness t SiO 2 = 147.5 nm and the angle of incidence θ = 65.6 • . Figure 12. Comparison of RI sensitivity measured for a smaller angle of incidence (blue) and theoretical RI sensitivity (red) (a). Normalized optical field intensity distribution in the analyte (water) for different silicon dioxide layer thicknesses t SiO 2 and angles of incidence θ (b). Conclusions In this paper, highly sensitive plasmonic structures of a simple design were employed in the Kretschmann configuration for liquid analyte sensing. Two structures, comprising a glass substrate with a gold layer, two adhesion layers of chromium, and a silicon dioxide overlayer with distinct protective layer thicknesses, were analyzed theoretically and experimentally. The thicknesses of individual layers were determined by the method of spectral ellipsometry. The sensing properties of both structures were derived based on the wavelength interrogation method, utilizing the spectral reflectance response of the SPR structures. Theoretical spectral reflectances for aqueous solutions of NaCl with distinct concentrations were calculated for both structures, and the derived RI sensitivities varied within the ranges of 5150-7130 nm/RIU and 5720-7660 nm/RIU for the first and second structures, respectively. The sensitivity was also expressed in terms of the mass fraction of the solute and they varied within the ranges of 7.9-10.8 nm/wt% and 8.7-11.6 nm/wt% for the first and second structures, respectively. The highest FOM calculated attained a value of 38.0 RIU −1 . Furthermore, measurements of the spectral reflectance ratio for aqueous solutions of NaCl with distinct concentrations were performed for both structures. Achieved RI sensitivities varied within the ranges of 4505-8140 nm/RIU and 5305-8690 nm/RIU for the first and second structures, respectively. Extending the measurements for a smaller angle of incidence, the RI sensitivity of 8570 to 15,785 nm/RIU, and the FOM of 37.9 RIU −1 , were reached. The main advantage of the structure lies in its simple design. The protective layer of silicon dioxide ensures the repeatability of measurement and the durability of the structure. It enables also polymer diffractive structures to be included [38]. The use of the structure is applicable to a wide variety of both gaseous and aqueous analytes, where even more aggressive environments can be considered. Furthermore, based on the measurements, the structures exhibit high sensitivity to the refractive index of the analyte, which can be adjusted by choosing a suitable angle of incidence.
5,317.2
2022-09-01T00:00:00.000
[ "Physics" ]
Development and Application of Sub-Cycle Mid-Infrared Source Based on Laser Filamentation This paper is a perspective article which summarizes the development and application of sub-cycle mid-infrared (MIR) pulses generated through a laser filament. The generation scheme was published in Applied Sciences in 2013. The spectrum of the MIR pulse spreads from 2 to 50 μm, corresponding to multiple octaves, and the pulse duration is 6.9 fs, namely, 0.63 times the period of the carrier wavelength, 3.3 μm. The extremely broadband and highly coherent light source has potential for various applications. The light source has been applied to advanced ultrafast pump–probe spectroscopy by several research groups. As another application example, single-shot detection of absorption spectra over the entire MIR range by the use of chirped-pulse upconversion with a gas medium has been demonstrated. Although the measurement of the field oscillation of the sub-cycle MIR pulse was not trivial, the waveform of the sub-cycle pulse has been completely characterized with a newly developed method, frequency-resolved optical gating capable of carrier-envelope phase determination. A particular behavior of the spectral phase of the sub-cycle pulse has been revealed through the waveform characterization. Introduction Recently, the generation of ultrafast mid-infrared (MIR) pulses has attracted increasing attention for advanced spectroscopy in the molecular fingerprint region and for high harmonic generation. Optical parametric amplification (OPA) pumped by a near-infrared (NIR) ultrashort pulse from a Ti:sapphire or Yb laser is the common route to ultrashort MIR pulses. OPA with solid nonlinear crystals is well established and has been commercialized for a decade. However, the transmission range and phase-matching bandwidth of solid crystals are fundamentally limited. Therefore, direct generation of a multi-octave supercontinuum or a sub-cycle pulse from an OPA in this wavelength range has not yet been realized. Four-wave mixing (FWM) in a gas medium is an alternative wavelength conversion scheme. It is effective for ultrabroadband, ultrashort pulse generation because of the wide transmission range and low dispersion of the gas medium. In 2007, Fuji and Suzuki applied the FWM process to the generation of broadband MIR pulses. The fundamental (ω1) and second harmonic (ω2) of the 30-fs pulses from a Ti:sapphire amplifier were focused into air and produced broadband MIR pulses (ω0) by four-wave mixing (ω1 + ω1 − ω2 → ω0) through filamentation. They succeeded in generating 1.3-cycle MIR pulses with this scheme [1]. The technology has been further developed by several groups [2-8]; in particular, Fuji and Nomura succeeded in generating sub-cycle MIR pulses by using the same scheme with nitrogen [7]. The spectrum spreads from 2 to 50 µm and the pulse duration is 6.9 fs at a central wavelength of 3.3 µm. The number of cycles is 0.63, well below a single cycle and approaching a half-cycle pulse. In this Perspective Article, we briefly summarize how the light source has been further developed and applied since the generation of the sub-cycle MIR pulse was reported in Applied Sciences [7].
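The "sub-cycle" label follows directly from the quoted parameters: one optical period at 3.3 µm lasts about 11 fs, so a 6.9 fs pulse spans roughly 0.63 periods. A two-line arithmetic check (ours, not the authors' code):

c = 299792458.0                       # speed of light, m/s
period_fs = 3.3e-6 / c * 1e15         # one optical cycle at 3.3 um: ~11.0 fs
print(6.9 / period_fs)                # ~0.63 cycles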
Ultrafast Pump-Probe Spectroscopy The ultrabroadband MIR supercontinuum is well suited to the detection of various vibrational modes of molecules and of high-reflection bands caused by free carriers in solid materials. It is an ideal probe pulse for studies of ultrafast dynamics in molecular science and solid-state physics. Several groups have demonstrated ultrafast pump-probe spectroscopy with the ultrabroadband MIR supercontinuum generated through filamentation. For example, structural dynamics of molecules on the femtosecond time scale were investigated with ultrafast pump-probe spectroscopy using the ultrabroadband MIR probe pulse [9-11], free-carrier dynamics in semiconductors were clearly observed over a very wide range of probe photon energies [12-14], and two-dimensional spectroscopy with the ultrabroadband MIR probe was experimentally demonstrated [15,16]. Chirped Pulse Upconversion Single-shot detection of the ultrabroadband MIR supercontinuum with reasonable resolution is useful for advanced MIR spectroscopies. One of the most straightforward methods is to use a dispersive MIR spectrometer consisting of a grating and a multichannel MIR detector. However, the bandwidth of this method has been limited to ∼500 cm−1 by the low pixel count and low sensitivity of the multichannel MIR detector. In addition, stray light from higher-order diffraction of the grating seriously disturbs broadband detection [17]. An alternative approach to detecting the MIR supercontinuum in a single shot is to optically convert the spectrum into the visible region and record it with a visible spectrometer, which has much higher performance than MIR spectrometers. Several groups have demonstrated single-shot detection of the MIR supercontinuum by the use of chirped-pulse upconversion (CPU) with solid crystals [8,18-21]. Although efficient frequency conversion is possible with solid crystals, the detection bandwidth is still limited to ∼600 cm−1 by the phase-matching condition. In order to detect the entire spectrum of the ultrabroadband MIR supercontinuum in a single shot, the authors proposed to use an FWM process in a gas medium for the upconversion of the MIR spectrum. An MIR pulse (ω0) and a chirped pulse from a Ti:sapphire chirped-pulse amplifier system (ω1) are focused into a gas medium and the MIR spectrum is upconverted to the visible (ω2) through an FWM process (ω1 + ω1 − ω0 → ω2). By using this scheme, single-shot detection of the full MIR spectrum spreading from 1.7 to 50 µm (from 200 to 6000 cm−1) has been achieved [22]. The authors combined the chirped-pulse upconversion technique with femtosecond pump-probe spectroscopy [12,14] and attenuated total reflection (ATR) spectroscopy [23,24]. As a demonstration of ATR combined with CPU, the MIR absorption spectra of acetic acid (CH3COOH, >99%) on the ATR prism after the addition of metallic magnesium (Mg) were monitored. One thousand spectra were recorded at intervals of 100 ms.
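Before turning to the measured spectra, it is worth tracking the FWM energy relation ω2 = 2ω1 − ω0 in wavenumbers to see why a single visible spectrometer covers the whole MIR span. The sketch below assumes an 800 nm (12,500 cm−1) Ti:sapphire drive; that value is consistent with the Ti:sapphire system mentioned above but is our assumption, not a number stated in the text.

def upconverted_wavenumber(nu_mir_cm, nu_pump_cm=12500.0):
    # omega_2 = 2*omega_1 - omega_0, written in wavenumbers (cm^-1);
    # 12500 cm^-1 corresponds to the assumed 800 nm drive pulse.
    return 2.0 * nu_pump_cm - nu_mir_cm

for nu0 in (200.0, 6000.0):                       # edges of the detected MIR range
    nu2 = upconverted_wavenumber(nu0)
    print(f"{nu0:6.0f} cm^-1 -> {nu2:6.0f} cm^-1 = {1e7 / nu2:.0f} nm")

Under this assumption the 200-6000 cm−1 window maps into roughly 400-530 nm, i.e. just over 100 nm of the visible, which a conventional silicon-based spectrometer resolves in a single exposure.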
Figure 1 shows the measured absorption spectra. The intensities of the absorption lines due to C=O (1700 cm−1) and OH (3000 cm−1) decrease. In contrast, the absorption lines due to COO− stretching (symmetric: 1430 cm−1, asymmetric: 1550 cm−1) and the weak bonding between Mg and acetate (640 cm−1) increase. Therefore, the simple chemical reaction of acetic acid with metallic magnesium, which forms magnesium acetate along with the release of hydrogen gas, is clearly observed in real time by monitoring the change of the MIR spectrum. It is also important to note that the authors neither had to repeat the experiment nor average the signal. We believe that this unique system is effective for advanced studies in various scientific fields. In the near future, we would like to apply the ATR system to investigate the oxygen generation process of metallic complexes and the anion-binding reaction dynamics of proteins. Carrier-Envelope Phase of the Sub-Cycle MIR Pulses After the publication of the sub-cycle MIR pulse generation scheme in 2013, the authors developed a new waveform characterization scheme, frequency-resolved optical gating capable of carrier-envelope phase determination (FROG-CEP), and succeeded in characterizing the field oscillation of the sub-cycle MIR pulse [25,26]. In our previous paper published in Applied Sciences [7], we showed that the generated spectrum is highly sensitive to the delay between the fundamental and second harmonic pulses from the Ti:sapphire laser. Later, we also investigated the delay dependence of the phase by using FROG-CEP [27]. The phase change due to the delay can be explained by the interference between the two parametric processes, which is consistent with the delay dependence of the power spectrum reported in [7]. One example of phase control by changing the delay between the fundamental and second harmonic pulses is shown in Figure 2. When the delay between the fundamental and second harmonic pulses is changed by ∼200 nm, the phase of the field changes by π. Conclusions In the several years since the publication of the sub-cycle MIR pulse generation scheme, some intriguing applications of the light source have already been demonstrated by several scientific groups. Thanks to the new waveform characterization scheme, the field oscillation of the sub-cycle pulse has become clear. The details of the generation and characterization of the sub-cycle MIR pulse are summarized in a review paper published by the authors [28].
In pump-probe experiments, the broadband MIR pulse has so far been used as a probe pulse. The next stage in the application of the light source is to use the pulse as a pump, which would initiate interesting nonlinear phenomena. The well-characterized sub-cycle field oscillation is ideal for studies of field-sensitive attosecond phenomena. In particular, high harmonic generation in solids is one of the most interesting applications of the light source. Needless to say, an increase in the pulse energy of the sub-cycle pulse is important for using it as a pump pulse. Unfortunately, the pulse energy of the MIR pulse (∼0.5 µJ) is too low for most experiments related to high-field physics. In particular, the ring-shaped beam profile makes it difficult to reach high intensity at the focus. However, if the beam used for the generation of the sub-cycle pulse has a wavelength longer than that of Ti:sapphire lasers, the beam profile should improve. According to our simple calculations, better beam quality should be achieved owing to the smaller phase mismatch at longer pump wavelengths [28]. At the same time, the smaller phase mismatch will result in better conversion efficiency. Recently, ultrabroadband MIR supercontinuum generation through filamentation based on a Yb laser system was demonstrated [29]. The beam profile was still ring shaped. If an even longer pump wavelength, namely ∼2 µm, can be used, the beam profile and conversion efficiency should improve further. We believe that the light source has great potential for various applications and can contribute to progress in ultrafast laser science and technology. Figure 1. Snapshots of the mid-infrared absorption spectrum of acetic acid during the chemical reaction with metallic magnesium. Figure 2. (a) A typical electric field of the sub-cycle mid-infrared pulse measured by the use of frequency-resolved optical gating capable of carrier-envelope phase determination; (b) When the delay between the fundamental and second harmonic pulses is changed by ∼200 nm, the phase of the field changes by π.
2,576
2017-08-19T00:00:00.000
[ "Physics", "Engineering" ]
Molecular characterization of a potential receptor of Eimeria acervulina microneme protein 3 from chicken duodenal epithelial cells Eimeria acervulina is one of seven Eimeria spp. that can infect chicken duodenal epithelial cells. Eimeria microneme protein 3 (MIC3) plays a vital role in the invasion of host epithelial tissue by the parasite. In this study, we found that chicken (Gallus gallus) ubiquitin conjugating enzyme E2F (UBE2F) could bind to the MIC3 protein of E. acervulina (EaMIC3), as screened using the yeast two-hybrid system, and that it might be the putative receptor protein of EaMIC3. The UBE2F gene was cloned from chicken duodenal epithelial cells. The recombinant protein of UBE2F (rUBE2F) was expressed in E. coli and the reactogenicity of rUBE2F was analyzed by Western blot. Gene sequencing revealed that the opening reading frame (ORF) of UBE2F was 558 base pairs and encoded a protein of 186 amino acids with a molecular weight of 20.46 kDa. The predicted UBE2F protein did not contain signal peptides or a transmembrane region, but had multiple O-glycosylation and phosphorylation sites. A phylogenetic analysis showed that the chicken UBE2F protein is closely related to those of quail and pigeon (Coturnix japonica and Columba livia). A sporozoite invasion-blocking assay showed that antisera against rUBE2F significantly inhibited the invasion of E. acervulina sporozoites in vitro. Animal experiments indicated that the antisera could significantly enhance average body weight gains and reduce mean lesion scores following a challenge with E. acervulina. These results therefore imply that the chicken UBE2F protein might be the target receptor molecule of EaMIC3 that is involved in E. acervulina invasion. Introduction Avian coccidiosis is caused by intestinal infection with single or multiple Eimeria spp. and results in huge production losses globally [14,22]. Eimeria acervulina is one of seven Eimeria spp., and it infects chicken duodenal epithelial cells resulting in malabsorption, poor feed utilization, and reduced body weight gains (BWGs) [33,35]. Eimeria spp. are site-specific when invading and reproducing in the chicken intestine. For instance, Eimeria tenella infects the caecum, Eimeria acervulina infects the duodenum, and Eimeria maxima infects the jejunum [17]. However, to date, the molecular mechanisms of invasion and the site-specificity of Eimeria spp. have not been elucidated. Recently, it has been reported that molecules on the surface of intestinal epithelial cells, which act as receptors or recognition sites for sporozoite invasion, result in the invasion and site specificity [1,7]. Furthermore, it has been confirmed that EtMIC3 of E. tenella plays a key role in invasion and site specificity [21]. It has also been reported that E. acervulina MIC3 (EaMIC3) and E. mitis (EmMIC3) are expressed in the sporozoite and merozoite stages, localized at the parasite apex, and could significantly protect chickens from E. acervulina infection [14,36]. These findings show that the Eimeria MIC3 proteins are the key molecules associated with invasion and site specificity. However, no studies regarding E. acervulina invasion receptors have been reported. In the current study, the ubiquitin conjugating enzyme E2F (UBE2F) protein of chicken duodenal epithelial cells was identified to potentially interact with EaMIC3, as screened using the yeast twohybrid system. Furthermore, the UBE2F gene was obtained by PCR amplification and expressed in a prokaryotic expression system. 
Invasion inhibition by antiserum against rUBE2F on E. acervulina sporozoites was observed through sporozoite invasion-blocking assays and chicken challenge experiments. Ethics approval The study was reviewed and approved by the Science and Technology Agency of Jiangsu Province. The approval ID is SYXK (SU) 2010-0005. Experimental chickens and parasites Eimeria-free Hy-Line layer one-day-old chicks were provided with ad libitum feed and water without anticoccidial drugs. Eimeria acervulina, Jiangsu strain, was reproduced and maintained in the Laboratory of Veterinary Parasite Disease, Nanjing Agricultural University, China. Sporozoites from E. acervulina oocysts were purified on DE-52 anion-exchange columns using a previously described protocol [34]. Isolation and identification of chicken duodenal epithelial cells The duodenal epithelial cells of two-week-old chicks were isolated as previously described [34]. Coccidian-free chicks were emerged in 70% ethanol after they were killed by exsanguination. Five minutes later, the duodenums were dissected using scissors and placed into Hanks' balanced salt solution (HBSS; PAA Laboratories, Linz, Austria). Subsequently, the duodenums were washed with HBSS until the mucus was completely removed. Following dissection of the mucosa into small strips (3  20 mm 2 ), the strips were placed into 1 mM DTT (Sigma-Aldrich, Taufkirchen, Germany) in 50 mL HBSS (30 min at ambient temperature). Sequentially, the mucosal strips were incubated in 1 mM EDTA (Sigma) for 10 min at 37°C. Mucosal strips were briefly rinsed in HBSS to eliminate already detached duodenal epithelial cells and transferred to fresh HBSS at ambient temperature, followed by 5-10 vigorous shakes of the container. This procedure led to instant detachment of duodenal epithelial cells in a full-length crypt formation. After rapid removal of the mucosal strips by passing the solution over a coarse mesh (400 lm, Rotilabo sieve; Carl Roth GmbH, Karlsruhe, Germany), rapid purification of detached duodenal epithelial cells was achieved using a mesh filter (80 lm pore size; Sefar, Kansas City, KS, USA) fixed with tape to a plastic ring (5 cm diameter, 2 cm height, and 3 mm thickness). The suspension containing duodenal epithelial cell crypts was gently but rapidly passed over the mesh to separate the cell crypts from single cells (erythrocytes, leukocytes, fibroblasts, etc.), which easily passed through the filter. The filter was then rapidly inverted, and purified intact duodenal epithelial cell crypts were immediately washed out with Dulbecco's Modified Eagle Medium (DMEM) (Gibo Ò , Life Technologies, MD, USA) at ambient temperature. The duodenal epithelial cell crypt solution was then rapidly transferred to an ECM-coated culture dish and cultured at 41°C and 5% CO 2 for 1.5 h. The nonadherent cells were collected for identification of the duodenal epithelial cells and construction of a cDNA library. Duodenal epithelial cells were identified by cell alkaline phosphatase (cAKP) stain (Azo-coupling method). The separated duodenal epithelial cells were fixed on a polylysine-coated cover slip and the slip was washed three times with 0.1 M PBS (pH 7.2). The duodenal epithelial cells were stained using a cAKP kit (JianCheng, Nanjing, China), according to the manufacturer's instructions. RNA extraction Total RNA was extracted from the E. acervulina sporozoites and the chicken intestinal epithelial cells using an E.Z.N.A. 
Ò Total RNA Maxi Kit (OMEGA, Norcross, GA, USA), according to the manufacturer's instructions. The quantity of RNA was estimated by spectrophotometry and samples with a ratio OD260/OD280 between 1.9 and 2 were used. RNase-free DNase I (TaKaRa, Clontech Laboratories, CA, USA) was used to remove the genomic DNA contamination in the prepared RNA samples. Subsequently, a SMART cDNA Library Construction Kit (TaKaRa, Clontech Laboratories, CA, USA) was used to reverse transcribe RNA into double-stranded cDNA, according to the manufacturer's instructions. The double-stranded cDNA was normalized using a Trimmer-2 cDNA normalization kit (Evrogen, Moscow, Russia), according to the manufacturer's instructions. A MiniBest DNA Fragment Purification Kit (TaKaRa) was used to purify cDNA, according to the manufacturer's instructions. CHROMA SPINTM-1000 (Clontech Laboratories, CA, USA) was used to select the cDNA greater than 0.5 kb. The cDNA library was created by using a SMART cDNA Library Construction Kit (TaKaRa, Clontech Laboratories) and cloned into the pray plasmid pPR3-N. To confirm the size of clone inserts, plasmid DNA was extracted from 32 clones randomly and digested using restriction enzyme Sfi I. Ninety-six monoclones were randomly selected for analysis of homogenization by sequencing. Identification of binding partners for EaMIC3 using yeast two-hybrid (YTH) screening A DUALhunter starter kit (Dualsystems Biotech, Schlieren, Switzerland) was used to identify the EaMIC3 binding molecule from chicken duodenal epithelial cells by YTH screening [6]. The bait plasmid pDHB1-EaMIC3 was transformed into yeast NMY51. After confirming the expression of the bait and functional assay and optimizing the screening stringency, the plasmid pDHB1-EaMIC3 was used to screen a chicken duodenal epithelial cell cDNA library to identify the proteins interacting with EaMIC3. Positive colonies were selected and the plasmids were extracted using a Yeast Plasmid Extraction Kit (Omega-Bio-tek, Norcross, GA, USA). The selected prey plasmids were transformed into E. coli DH5a and recovered by ampicillin selection. The pPR3N-F and pPR3N-R primers were used to detect the inserted fragments in the selected prey plasmid gene using PCR. Then the isolated positive prey plasmids were retransformed into yeast NMY51, which contained the bait plasmid pDHB1-EaMIC3 to eliminate false positives. LargeT was used as a bait control and Alg5 fused to NubG or NubI was used as the negative or positive prey control, respectively. The inserted fragments in these prey plasmids were sequenced and the DNA sequences were used to search GenBank. Cloning of the UBE2F gene Total RNA of chicken intestinal epithelial cells was reverse transcribed into cDNA as a template. Specific primers were designed and synthetized to amplify the ORF of UBE2F (EcoR I anchored forward primer: 5 0 -CGGAATTCTGCT-CACTCTGGCAAGCAA -3 0 ; Xh I anchored reverse primer: . The amplified UBE2F gene was ligated with pMD-19T cloning vector (TaKaRa, Dalian, China) and transformed into E. coli DH5a competent cells (Vazyme Biotech Co., Ltd., Nanjing, China). Subsequently, clones of UBE2F were checked by sequence confirmation through the online database (https:// blast.ncbi.nlm.nih.gov/Blast.cgi). Expression and purification of recombinant UBE2F protein The identified recombinant plasmid pMD-19T-UBE2F was digested by endonuclease EcoR I and Xho I. Subsequently, the target fragment was inserted into the pET-32a expression vector and transformed into E. 
coli BL21 (DE3) competent cells (Vazyme biotech Co., Ltd., Nanjing, China). Positive clones were selected and identified by PCR, endonuclease digestion, and DNA sequencing. The recombinant UBE2F protein (rUBE2F) was expressed in E. coli BL21 and purified using a Ni 2+ -nitrilotriacetic acid (Ni-NTA) column (GE Healthcare, Chicago, IL, USA). The purified protein was determined using 12% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and the concentration of the rUBE2F was determined by the Bradford procedure using bovine serum albumin (BSA) as a standard. The purified protein was stored at À70°C until use. Meanwhile, the pET-32a fusion protein was prepared by the same method. Preparation of chicken antiserum against rUBE2F To generate chicken antiserum against rUBE2F, two-weekold chicks were vaccinated with 200 lg purified rUBE2F by intramuscular injection into their thighs and the chicks were given four booster vaccinations at intervals of seven days. Finally, the antiserum was collected and stored at À70°C. Chick antiserum against the pET-32a fusion protein was prepared by the same method and negative chick serum was collected simultaneously. The antibody titer was determined by ELISA. Western blot analysis of rUBE2F [15] To evaluate the inhibitive effects of anti-rUBE2F serum on the invasion of E. acervulina in vitro, the sporozoites from E. acervulina oocysts were cleaned and sporulated, as previously described [18]. Two-week-old chicks were randomly divided into five groups of five. The duodenal sections (5 cm lengths) were collected and preserved in preheated HBSS at 41°C. One end of each duodenum section was ligated and 1.0  10 6 E. acervulina sporozoites were used to infect the section. Meanwhile, chicken anti-rUBE2F serum was diluted with PBS at a ratio of 1:5 and was added into the duodenum sections and mixed with the sporozoites, followed by ligation of the second end of the section. Chicken antiserum against the pET-32a protein and the chicken negative serum at the same dilution were used as controls, following the same method. Duodenums were then incubated in preheated PBS at 41°C. After 20 min, the effluents were collected by washing the sections with PBS. Sporozoites in the effluents were counted and the invasion inhibition rates of the antisera were calculated using the following equation: Sporozoite invasion inhibition rate ¼ number of sporozoites in the effluent= total number of infected sporozoites  100%: Protective effects of anti-rUBE2F serum on chicks challenged with E. acervulina Four-week-old chicks of similar weight were randomly divided into five groups of 15. Each group, with the exception of the unchallenged control group, were infected with 1.2  10 5 E. acervulina sporulated oocysts by oral gavage. The unchallenged control chicks were given the same volume of PBS by oral gavage. At the same time, 0.1 mL of chicken antiserum against rUBE2F diluted with PBS at a ratio of 1:5 was injected intravenously into the wings of the experimental group once a day for six days [8,31]. The chicken antiserum against pET-32a vector protein and the chicken negative serum were injected by the same method, as the control groups. On Day 7, all the chicks were humanely killed and body weight gains and lesion scores were evaluated. Chick enteric contents were collected separately to evaluate the number of oocysts per gram feces (OPG) using a McMaster counting chamber, as previously described [36]. 
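The in vitro read-out defined earlier in this section reduces to a single ratio per duodenal section. A minimal sketch of that calculation follows; the counts in the example are placeholders for illustration, not data from the study.

def invasion_inhibition_rate(sporozoites_in_effluent, sporozoites_inoculated):
    # Percentage of inoculated sporozoites recovered in the effluent,
    # following the definition given in the text.
    return 100.0 * sporozoites_in_effluent / sporozoites_inoculated

# Placeholder example: 4.2e5 of the 1.0e6 inoculated sporozoites washed out unattached.
print(invasion_inhibition_rate(4.2e5, 1.0e6))   # -> 42.0 (%)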
Statistical analysis One-way analysis of variance (ANOVA) with Duncan's multiple range test was used to determine statistical significance, using the SPSS statistical package (SPSS for Windows 16, SPSS Inc., Chicago, IL, USA). Differences among groups were tested and p < 0.05 was considered to indicate a significant difference. Identification of the bait vector and the cDNA library of chicken duodenal epithelial cells The EaMIC3 gene in the bait vector pDHB1-EaMIC3 was obtained by RT-PCR and the target fragment size of 2607 bp (Fig. 1A) was confirmed through restriction enzyme digestion (Fig. 1B). Sequence analysis also confirmed that the insert was the EaMIC3 ORF, indicating successful construction of the bait vector pDHB1-EaMIC3. The chick duodenal epithelial cells were isolated, identified by cAKP staining, and used to construct a cDNA library. The cDNA library contained at least 4 × 10⁶ primary recombinants, and the average insert size was 1.0 kb (Fig. 2). In addition, 96 monoclones were randomly selected for sequencing and the results showed that none of the 96 monoclones were redundant, indicating good homogenization. UBE2F is a binding protein for EaMIC3 In the YTH screening, 37 clones encoding proteins that showed a potential interaction with the EaMIC3 protein in yeast cells were identified. Multiple potential EaMIC3-interacting proteins were identified in the retest, and 17 clones were obtained. These genes were then identified by DNA sequencing and searching of GenBank. One gene was determined to be UBE2F (NCBI accession number XM_013178750.1). Cloning of the UBE2F gene and sequence analysis of UBE2F The ORF of UBE2F in plasmid pET-32a-UBE2F was obtained by RT-PCR (Fig. 3A), and a target fragment with a size of 558 bp (Fig. 3B) was identified by enzymatic digestion. Sequence analysis showed that the vector insert was the ORF of UBE2F. This result indicated that the prokaryotic expression vector pET-32a-UBE2F was constructed correctly. The ORF was predicted to encode a 186-amino acid protein with a molecular weight of 20.5 kDa and a pI of 6.50. The predicted UBE2F protein did not contain a signal peptide or transmembrane region. One N-glycosylation site, four O-glycosylation sites, and 19 phosphorylation sites were found in the predicted protein, but no GPI anchor could be detected. As shown in Figure 4A, the protein had six hydrophilic regions, comprising residues 6-33, 54-64, 72-86, 103-112, 122-138, and 151-186, and six regions of high, continuous antigenic index, comprising residues 6-46, 55-63, 70-100, 107-114, 120-138, and 151-186. Moreover, most regions of the UBE2F protein were predicted to be hydrophilic and flexible. The phylogenetic tree of amino acid sequences was built using MEGA4.0 (https://www.megasoftware.net/) and the cladogram (Fig. 4B) showed that the UBE2F protein was only distantly related between poultry and wildfowl. Expression and purification of rUBE2F and pET-32a proteins The rUBE2F was expressed in E. coli BL21 (DE3) and purified on a Ni-NTA column. The size of rUBE2F was consistent with the sum of the molecular weights of the pET-32a vector fusion tag (18 kDa) and UBE2F (21 kDa), appearing as a single band of around 39 kDa on the SDS-PAGE gel (Fig. 5A). Immunoblot for the recombinant protein Western blot showed that the rUBE2F could be recognized by anti-rUBE2F chicken serum, but not by normal chicken serum (Fig. 5B). The antibody titer of chicken anti-rUBE2F was 2¹⁰, and this serum could be used for subsequent research.
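The statistical analysis described above (one-way ANOVA followed by Duncan's multiple range test in SPSS) can be outlined with open-source tools. The sketch below is a stand-in under stated assumptions: scipy provides the one-way ANOVA, and Tukey's HSD from statsmodels is used in place of Duncan's test, which is not available in these libraries; the group sizes match the in vitro experiment (five per group), but the values themselves are invented.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical measurements for three groups; not data from the study.
anti_rube2f = np.array([62.1, 58.4, 65.0, 60.2, 63.3])
anti_pet32a = np.array([21.5, 24.0, 19.8, 22.7, 20.9])
negative_serum = np.array([20.1, 23.5, 21.0, 19.4, 22.2])

f_stat, p_value = f_oneway(anti_rube2f, anti_pet32a, negative_serum)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")  # p < 0.05 indicates a group effect

# Post-hoc pairwise comparisons (Tukey's HSD here, standing in for Duncan's multiple range test).
values = np.concatenate([anti_rube2f, anti_pet32a, negative_serum])
groups = ["anti-rUBE2F"] * 5 + ["anti-pET32a"] * 5 + ["negative serum"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```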
Inhibition of sporozoite invasion by antisera against rUBE2F in vitro The in vitro inhibition of E. acervulina sporozoite invasion by antisera against rUBE2F is shown in Figure 6. As compared with anti-pET-32a, negative serum, and the PBS control group, the anti-UBE2F group significantly reduced the efficiency of E. acervulina sporozoite invasion (p < 0.01). No significant differences were observed among anti-pET32a, negative serum, and PBS control groups (p > 0.05). These results indicate that antiserum against rUBE2F was effective in inhibiting the invasion of E. acervulina sporozoites in the duodenum in vitro. The protective efficacy of antisera against rUBE2F in challenged chicks The efficacy of invasion inhibition of antiserum against rUBE2F is shown in Table 1. No chicks died following the E. acervulina challenge in any group during the experimental trial. As compared with anti-pET32a protein, negative serum, and the challenged control group, chicks in the anti-rUBE2F group showed significantly increased body weight gains and decreased lesion scores (p < 0.01). No significant differences were observed between the anti-UBE2F and the anti-pET32a protein, negative serum, and challenged control group regarding oocyst output (p > 0.05). These results indicate that antiserum against rUBE2F partially mitigate invasion inhibition against an infection challenge by E. acervulina. Discussion Coccidiosis is a deadly disease that hampers the productivity and welfare of commercial chicken enterprises. Thus, the disease is a major threat to the global poultry industry [12,30,32,37]. Seven known species of Eimeria cause coccidiosis in chickens by affecting the different parts of the intestinal tract in a site-specific manner [10,23]. Growing evidence indicates that molecular interactions between Eimeria sporozoites and host cells provide a prelude to, and result in, site-specific invasion [1]. It is suggested that the proteins secreted from apicomplexan microneme organelles (MICs) of the sporozoites allow parasites to bind a diverse range of host cell oligosaccharide epitopes and play important roles in parasite adhesion to and invasion of host cells [5]. In Toxoplasma gondii and E. tenella, the sialic-acid binding MAR (microneme adhesive repeat) domain in the MICs was shown to make a significant contribution to different host and tissue tropisms [25]. The dual immunofluorescence staining of E. tenella microneme 3 (EtMIC3) and 5 (EtMIC5) on fixed and permeabilized sporozoites of E. tenella showed that EtMIC3 was located mainly at the apical tip of the sporozoite, while the majority of EtMIC5 labeling was detected just posterior to this region [8]. Moreover, EtMIC3 could bind to sialic acid-bearing molecules on the epithelial cell surface of the host, and played a key role in sporozoite invasion [19,20]. Eimeria acervulina infects the duodenal epithelium of chickens, which results in morphological and functional damage, leading to a reduction in nutrient digestion and growth performance in broilers [9]. Previous research has shown that EaMIC3 is expressed in the sporozoite and merozoite stages of E. acervulina and could protect chickens from E. acervulina infection [36]. Thus, it might also play an important role in the specificity of invasive and parasitic sites [1,28]. Although many invasion-related molecules of Eimeria have been studied [13,25], there are only a few reports concerning sporozoite receptors on host epithelia. 
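Oocyst output in the challenge experiment was expressed as oocysts per gram of feces (OPG) from McMaster chamber counts. A generic helper for that arithmetic is sketched below; the chamber volume, suspension volume, and sample mass are placeholders, since the exact counting protocol of reference [36] is not reproduced here.

```python
def oocysts_per_gram(oocysts_counted: int,
                     counted_volume_ml: float,
                     suspension_volume_ml: float,
                     sample_mass_g: float) -> float:
    """Generic McMaster-style calculation: scale the oocysts seen in the counted
    chamber volume up to the whole suspension, then normalize by sample mass."""
    return oocysts_counted * (suspension_volume_ml / counted_volume_ml) / sample_mass_g

# Placeholder values for illustration only (a common McMaster setup counts two 0.15 mL chambers
# of a 4 g / 60 mL suspension, giving a multiplication factor of 50):
print(oocysts_per_gram(oocysts_counted=42, counted_volume_ml=0.3,
                       suspension_volume_ml=60.0, sample_mass_g=4.0))  # 2100.0 OPG
```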
In the current study, a cDNA library of chicken duodenal epithelial cells was constructed and screened for EaMIC3 receptor molecules by YTH. Our results show that UBE2F from chick duodenal epithelial cells could interact with EaMIC3, that antiserum against rUBE2F significantly inhibited the invasion of E. acervulina sporozoites in vitro, and that it could significantly enhance average BWGs and reduce mean lesion scores after a challenge with E. acervulina in vivo. These results suggest that the chicken protein UBE2F might be the target receptor molecule of EaMIC3 involved in the invasion of E. acervulina. Ubiquitin-conjugating enzymes, also known as E2 enzymes and as ubiquitin-carrier enzymes, perform the second step in the ubiquitination reaction that targets a protein for degradation via the proteasome [2,26]. (Figure 6. The inhibition of Eimeria acervulina sporozoite invasion by chicken antisera against rUBE2F. The inhibition ratio was calculated and expressed as mean ± SD; columns with different letters differ significantly (p < 0.01), whereas columns with the same letter do not differ significantly (p > 0.05).) UBE2F plays a specific role in the regulation of ubiquitin chain assembly and topology and the initiation or elongation of a ubiquitin chain [16,24,27]. Interaction between EaMIC3 and UBE2F might induce ubiquitination of membrane proteins in host cells, leading to cell breakdown and thereby enabling E. acervulina sporozoite invasion into host cells and pathogenesis [11]. In the current study, the complete gene sequence of chicken UBE2F was successfully obtained using PCR. The nucleic acid sequence of UBE2F showed that it contained a 558 bp ORF encoding a protein of 186 amino acids. The molecular mass of the deduced translation product was about 20.5 kDa. The predicted UBE2F protein did not contain a signal peptide or transmembrane region but did contain multiple O-glycosylation and phosphorylation sites. Glycosylation and phosphorylation of proteins are required for their physiological functions. The process of O-glycosylation involves the addition of N-acetyl-galactosamine to serine or threonine residues by N-acetylgalactosaminyltransferase, followed by other carbohydrates such as galactose and sialic acid [29]. EtMIC3 has high specificity for sialylated glycans, and it contains several sialic-acid-binding MARs. The presence of multiple O-glycosylation and phosphorylation sites in UBE2F indicates that sialic acids could be added to UBE2F by O-glycosylation; this supports the conjecture that UBE2F is the receptor of EaMIC3 involved in the invasion of E. acervulina. The salivary glands of Aedes aegypti mosquitoes contain the receptor of the malaria sporozoite, and antiserum against the salivary gland could block sporozoite invasion in vivo [3]. Monoclonal antibodies against sporozoite receptors could also inhibit the invasion of salivary glands by Plasmodium yoelii [4]. In this study, antiserum against rUBE2F significantly inhibited E. acervulina sporozoite invasion in vitro and in vivo. These results suggest that UBE2F plays an important role as the EaMIC3 receptor in E. acervulina invasion into host cells. However, the antiserum against rUBE2F did not completely inhibit in vitro and in vivo invasion, which suggests that there might be other molecules involved in the invasion of E. acervulina into host cells, or that the antiserum dose was insufficient. This needs to be investigated further.
Conclusion In this study, the EaMIC3 receptor molecule UBE2F was identified by YTH, and the molecular characteristics of UBE2F were analyzed. Together, the results imply that EaMIC3 and its receptor protein UBE2F might be the target molecules involved in E. acervulina invasion.
5,226.2
2020-03-20T00:00:00.000
[ "Biology", "Medicine" ]
MEKK4 is an effector of the embryonic TRAF4 for JNK activation. TRAF4 has previously been shown to activate JNK through an unknown mechanism. Here, we show that endogenous TRAF4 and MEKK4 associate in both human K562 cells and mouse E10.5 embryos. TRAF4 interacts with the kinase domain of MEKK4. However, this association does not require MEKK4 kinase activity. The interaction of MEKK4 and TRAF4 is further demonstrated by the colocalization of TRAF4 and MEKK4 in cells. Importantly, although TRAF4 has little or no ability to activate JNK independently, coexpression of TRAF4 and MEKK4 results in synergistic activation of JNK that is inhibited by a kinase-inactive mutant of MEKK4, MEKK4 K1361R. MEKK4 binds the TRAF domain of TRAF4 and MEKK4/TRAF4 activation of JNK is inhibited by expression of the TRAF domain. Furthermore, TRAF4 stimulates MEKK4 kinase activity by promoting MEKK4 oligomerization, and JNK activation can be stimulated by chemical induction of MEKK4 dimerization. The findings identify MEKK4 as the MAPK kinase kinase for TRAF4 regulation of the JNK pathway. MAP kinase (MAPK) pathways are critical regulators of numerous cellular functions including cell proliferation, adhesion, migration, differentiation, and apoptosis. MEKK4 is a 180-kDa MAPK kinase kinase (MAP3K) that phosphorylates and activates the MAPK kinases (MAP2Ks) MKK3, MKK4, MKK6, and MKK7. MKK4 and MKK7 phosphorylate and activate the MAPK JNK, whereas MKK3 and MKK6 phosphorylate and activate the MAPK p38. Both the upstream signaling molecules and the biochemical mechanisms that regulate MEKK4 activation of JNK are poorly defined. TRAF (TNF receptor-associated factor) family proteins play important roles in development and immunity (1,2). TRAF proteins function in part as scaffolds to organize signaling complexes coupled to specific receptors at the plasma membrane. Mammalian genomes encode six TRAF proteins, each with a TRAF domain near its C terminus that mediates oligomerization of TRAF proteins and their association with specific membrane proteins. The N termini of TRAF proteins organize signaling complexes involved in the regulation of MAPK and NF-κB activation (2). Mice with targeted gene disruption of TRAF2, -3, and -6 die shortly after birth, while disruption of the TRAF4 gene is embryonic lethal (3,4). TRAF4-deficient mouse embryos display neural tube closure defects and skeletal malformations similar to the MEKK4 K1361R knock-in mouse (4,5).
TRAF4 is an atypical member of the TRAF family of proteins (1, 6). Its expression is strongest during development, suggesting an important role for TRAF4 in the developing embryo (4,7). Similar to TRAF4, MEKK4 is expressed at low levels in adult tissues but is expressed strongly during development (5,8). TRAF4 has been shown to promote JNK activation; however, the signaling pathways leading to JNK activation have not been defined (9). Therefore, we investigated the role of MEKK4 in TRAF4-mediated signaling. We have discovered that TRAF4 is a binding partner for MEKK4. Their association occurs through the kinase domain of MEKK4 and the TRAF domain of TRAF4. TRAF4 increases MEKK4 kinase activity by promoting the oligomerization of MEKK4 and enhances MEKK4 signaling to JNK. MATERIALS AND METHODS Cell Lines, Culture Conditions, and Transfections-K562 cells were cultured in RPMI 1640 containing 10% fetal bovine serum, 1% penicillin and streptomycin. E10.5 embryos were isolated from timed matings of 129 SvEv mice according to university and federal guidelines for the use of animals. 293 cells and COS-7 cells were cultured in Dulbecco's modified Eagle's high glucose medium supplemented with 10% fetal bovine serum, 1% penicillin and streptomycin. Transfections of 293 cells were performed in 60-mm dishes using Lipofectamine Plus (Invitrogen) according to the manufacturer's specifications for 24-36 h. COS-7 cells were plated on coverslips and transfected as described for 293 cells. Plasmids-Human TRAF4 cDNA was kindly provided by Nancy Raab-Traub (University of North Carolina, Chapel Hill). HA-tagged full-length TRAF4 and the TRAF domain of TRAF4 (amino acids 308-470) in pCDNA3 were a kind gift from Dale Bredesen (Buck Center for Research in Aging). The TRAF4 N terminus (amino acids 1-307) and the TRAF4 RING deletion mutant (amino acids 58-470) in pCDNA3 were kindly given by Wafik El-Deiry (Howard Hughes Medical Institute and University of Pennsylvania). HA-tagged JNK, HA-tagged full-length MEKK4, and HA-tagged MEKK4 kinase domain (amino acids 1301-1597) were as described previously (10). FLAG-tagged wild-type MEKK4 was constructed by subcloning MEKK4 into FLAG-pCDNA3. The kinase-inactive MEKK4, MEKK4 K1361R, was constructed using PCR to replace the active-site lysine with an arginine, producing a kinase-inactive MEKK4 that was verified by sequencing. FK506-binding protein (FKBP) MEKK4 fusion constructs were created by introduction of SalI sites on the wild-type MEKK4 kinase domain by PCR, and products were subcloned into pSH1/SN-E-FV′-FVls-E, a gift from David Spencer (Baylor), containing a Phe36 → Val (FV) point mutation engineered in the FKBPs that results in 1000× higher binding affinity for the synthetic dimerizer AP20187 compared with wild-type FKBP (11). Two FV domains were inserted at the N terminus of the kinase domain of MEKK4, together with a C-terminal HA tag. Immunofluorescence-For TRAF4 and MEKK4 staining, COS-7 cells plated on coverslips were fixed for 10 min in 3% paraformaldehyde containing 3% sucrose in phosphate-buffered saline, pH 7.4, and were permeabilized for 6 min in 0.1% Triton in phosphate-buffered saline. Coverslips were washed, blocked in 10% donkey serum, and incubated for 1 h with murine anti-FLAG and goat anti-TRAF antibodies diluted 1:500. Coverslips were washed and incubated with DAPI (0.04 ng/ml), Cy3 donkey anti-goat diluted 1:500, and fluorescein isothiocyanate donkey anti-mouse. Imaging was performed using an Axiovert 200M microscope from Zeiss.
Imaging software from Intelligent Imaging Innovations (Denver, Colorado) was used to perform nearest-neighbors deconvolution on 0.1-μm sections. This work was supported by National Institutes of Health Grants DK37871 and GM30324. RESULTS AND DISCUSSION TRAF4, an Atypical Embryonic TRAF, Binds MEKK4-Immunoprecipitation assays using K562 cells that endogenously express high levels of both TRAF4 and MEKK4 demonstrated that endogenous MEKK4 was specifically coprecipitated with TRAF4 (Fig. 1A). Similarly, immunoprecipitation of endogenous MEKK4 from K562 cells revealed an association of TRAF4 (data not shown). Both TRAF4 and MEKK4 are expressed at low levels in most adult mouse tissues, whereas expression is strong throughout embryogenesis with particularly high levels in the developing neuroepithelium (5,7,8). Significantly, TRAF4 and MEKK4 coprecipitated in lysates prepared from E10.5 embryos, showing the association of MEKK4 and TRAF4 during development (Fig. 1B). These data demonstrate an in vivo association of TRAF4 and MEKK4 and suggest a role for this interaction in MEKK4 and TRAF4 function. Analysis of TRAF4 and MEKK4 Interaction Domains-Analysis of the domains of MEKK4 that bind full-length TRAF4 revealed that the N terminus of MEKK4 did not bind TRAF4 (data not shown). The kinase domain alone of MEKK4 was sufficient for the association of MEKK4 with TRAF4 (Fig. 2A). However, binding of TRAF4 to MEKK4 was not dependent on MEKK4 kinase activity, as TRAF4 coimmunoprecipitated equally with FLAG-tagged wild-type MEKK4 or a kinase-inactive MEKK4 in which the active-site lysine is substituted with an arginine (MEKK4 K1361R) (Fig. 2B). The interaction domains of TRAF4 with full-length MEKK4 were similarly examined. In contrast to the N terminus of TRAF4, which failed to bind MEKK4, the TRAF domain of TRAF4 was necessary and sufficient for binding to MEKK4 (Fig. 2C). Consistent with the lack of binding of MEKK4 to the N terminus of TRAF4, MEKK4 retained the ability to bind a mutant TRAF4 wherein the RING domain in the N terminus was deleted (Fig. 2C). Together, these data show the specific interaction of the TRAF domain of TRAF4 with the kinase domain of MEKK4. Colocalization of MEKK4 and TRAF4 in the Cytoplasm of COS-7 Cells-Previous reports have yielded conflicting results regarding the localization of TRAF4 within the cell. Some studies have shown a nuclear localization of TRAF4, whereas others have shown that TRAF4 localizes specifically to the cytoplasm (12,13). MEKK4 has been shown to localize to the cytoplasm and to perinuclear, Golgi-associated vesicles (14). To determine whether TRAF4 and MEKK4 localized to similar regions within the cell, COS-7 cells were cotransfected with TRAF4 and MEKK4, and the localization of these proteins was examined by immunofluorescence of deconvolved 0.1-μm sections. As shown in Fig. 3, MEKK4 and TRAF4 colocalized in the cytoplasm of COS-7 cells with pronounced staining in the perinuclear region of the cell. Immunostaining for TRAF4 and MEKK4 was not detected in the nuclei of cells, as assessed by lack of colocalization with the nuclear stain DAPI (Fig. 3). The colocalization of TRAF4 and MEKK4 is consistent with their ability to coprecipitate from cell lysates.
TRAF4 Promotes MEKK4 Activation of the JNK Pathway-Similar to previous findings, expression of TRAF4 alone weakly activates JNK, and MEKK4 expressed alone modestly activates JNK, producing only a 3-fold increase in JNK activity relative to basal (Fig. 4, A and B) (9, 10). However, coexpression of TRAF4 and MEKK4 resulted in synergistic activation of JNK, producing a 7-fold induction in phosphorylation of JNK (Fig. 4, A and B). JNK activation by coexpressed TRAF4 and MEKK4 was markedly inhibited by the co-expression of MEKK4 K1361R (Fig. 4C). Additionally, the TRAF domain of TRAF4, the site that binds MEKK4, was able to inhibit activation of JNK by MEKK4 (Fig. 4D). Interestingly, TRAF4 does not promote MEKK4 activation of p38 (data not shown). Furthermore, the TRAF domain of TRAF4 inhibits JNK, but not p38, activation by MEKK4, indicating that TRAF4/MEKK4 complexes are specific for JNK signaling (data not shown). TRAF4 Increases MEKK4 Kinase Activity by Promoting Dimerization of MEKK4-In vitro kinase assays demonstrate the ability of TRAF4 to increase MEKK4 kinase activity. Coexpression of TRAF4 with MEKK4 resulted in a 2.2-fold stimulation of MEKK4 phosphorylation of purified His-MKK6 as compared with expression of MEKK4 alone (Fig. 5A). One mechanism to explain the ability of TRAF4 to increase MEKK4 kinase activity and MEKK4-dependent JNK activation is through the oligomerization of MEKK4 by TRAF4. This mechanism would also explain the ability of the kinase-inactive MEKK4 to block activation of JNK by TRAF4 and MEKK4. To test this hypothesis, we examined the association of wild-type HA-tagged MEKK4 with FLAG-tagged MEKK4 K1361R and found that wild-type MEKK4 coprecipitated weakly with the kinase-inactive MEKK4 K1361R and that coprecipitation of wild-type and kinase-inactive MEKK4 K1361R is significantly enhanced by the presence of TRAF4 (Fig. 5B). To further ascertain the ability of dimerization of MEKK4 to promote signaling to JNK, we utilized the cell-permeable synthetic dimerizer AP20187. The kinase domain of MEKK4 was fused to two FKBP domains containing a Phe36 → Val (FV) point mutation. AP20187 binds to FV with 1000× greater affinity than to endogenous FKBP (11). Cells expressing HA-JNK alone displayed a minimal response to AP20187 (Fig. 5C). The addition of the chemical dimerizer AP20187 to cells expressing both HA-FV-MEKK4 and HA-JNK resulted in significant induction of JNK activation, as measured by in vitro phosphorylation of GST-c-Jun by HA-JNK (Fig. 5C). Together, these data demonstrate the ability of TRAF4 to induce the oligomerization of MEKK4 and promote MEKK4 activation, and show that dimerization is sufficient to induce MEKK4-dependent JNK activation. Cumulatively, our results show that the TRAF domain of TRAF4 associates with the kinase domain of MEKK4 and promotes MEKK4 activation and signaling to JNK. The kinase-inactive MEKK4 K1361R behaves as a dominant-negative inhibitory mutant in TRAF4 complexes, inhibiting the activation of wild-type MEKK4. Despite intensive investigation, upstream receptor pathways that regulate TRAF4 have not been identified. Several lines of evidence suggest that TRAF4 is an atypical member of the TRAF family. Unlike TRAF1 transgenics and TRAF2, TRAF3, TRAF5, and TRAF6 knock-outs, which each display immune deficiencies, TRAF4 knock-outs have normal immune systems (3,4,15). Instead, TRAF4 knock-outs display severe open neural tube and skeletal defects similar to the MEKK4 knock-out and the MEKK4 K1361R knock-in (4,5,8).
Other TRAFs have been shown to bind several members of the TNF receptor superfamily (3). In vitro binding experiments have shown that TRAF4 interacts weakly with the LTβR and the p75 nerve growth factor receptor but not at all with TNFR1, TNFR2, Fas, or CD40 (16-18). Furthermore, TRAF4 shows highest homology with the Drosophila TRAF (DTRAF1), with 45% amino acid identity, suggesting that TRAF4 may represent an archaic member of the TRAF family (1). Both DTRAF1 and TRAF4 have seven zinc fingers (as compared with five in TRAF2, TRAF3, TRAF5, and TRAF6) and have a truncated coiled-coil domain (1,19). DTRAF1 has also been shown to activate JNK via a mechanism involving binding to the MAP4K Misshapen (19). The similarity of TRAF4 to DTRAF1 and the significant differences from other mammalian TRAF family members suggest that regulation of TRAF4, the non-classical mammalian TRAF, may be functionally different from that of other TRAF proteins (6). Although we do not know the upstream stimuli for initiating TRAF4 activation of MEKK4 during development, we show that MEKK4 and TRAF4 associate in the developing embryo. The existence of an endogenous complex, the coregulation of the JNK signaling pathway, and the overlapping phenotypes of the TRAF4 knock-out and the MEKK4 knock-out and MEKK4 K1361R knock-in demonstrate the relevance of the TRAF4/MEKK4 interaction and suggest that TRAF4 and MEKK4 are in a common pathway (4,8,20). TRAF4 effectors have been elusive, and our work defines MEKK4 as the first signaling protein shown to be an effector for TRAF4, with TRAF4 regulating its kinase activity.
3,361.4
2005-10-28T00:00:00.000
[ "Biology", "Computer Science" ]
Beginnings of Developing Kinetic Scenarios of Plasma Evolution due to Coulomb Collisions A new logic of reducing the two-time formalism to a highly informative scenario of redistribution of plasma particles in momentum due to Coulomb collisions is reported. Based on objective plasma evolution equations following from a properly reduced full plasma description, it has a more sound foundation than that presented in the previous report on increasing the informativeness of scenarios of the phenomenon. The possibilities of adapting the approach to the further development of more informative scenarios of plasma collisional relaxation and the modelling of transport phenomena are discussed. Introduction In physics research, evolving systems are of considerable interest to theorists. Note that even a momentary change in a factor affecting a certain physical system cannot cause an instantaneous transition of the system from an initial state to some final one: the transition from the first to the second state takes some time and, thus, involves the evolution of the system. Meanwhile, the final state itself is often determined by the subtleties of the system evolution scenario. In view of this, the development of evolution scenarios is of particular importance in the above-mentioned studies of evolving systems. In turn, its importance has led to the emergence of the concept of the informativeness of physical theoretical scenarios of system evolution: the longer a scenario adequately depicting the real picture of the macrophysical evolution of a system, the higher the informativeness of the scenario [1][2][3][4][5][6]. Regarding plasma science, we have found that the bulk of the kinetic scenarios of plasma phenomena generated by predecessors have inappropriately low informativeness due to the lack of an adequate understanding of two important aspects. The first of them is the asymptotic nature of the convergence of successive approximations to a plasma scenario. As a matter of fact, modelling plasma kinetics involves reducing the full plasma description that is given by the simultaneous Maxwell [7] and Klimontovich [8,9]-Dupree [10] equations to a simpler kinetic plasma model. In view of this, the researcher inevitably loses the ephemeral possibility of adequately modelling the plasma evolution during an infinite time interval, since any of the possible schemes of this reduction imply a significant reduction in the informational basis of the theory. Further, the reduction inevitably involves the generation of some nonlinear perturbation theory. In any such theory, an increase in the order of consideration entails a factorial increase in the number of terms considered. Consequently, after a certain order of consideration, an improvement in the accuracy of the scenario (which is the only reason for the increase in the order of consideration) will inevitably be replaced by a reduction in accuracy: successive iterations begin to diverge due to a growth in the number of terms. It is this feature of the behaviour of successive approximations of perturbation theory that is termed "asymptotic convergence". Increasing the order of consideration within the framework of the corresponding perturbation theory, the researcher can equally rigorously develop different scenarios of the plasma evolution from a specified initial state by referring to different versions of the leading-order approximation that correspond to this initial state. 
The point is that, when one takes different lowest-order approximations of the employed nonlinear perturbation theory, its first sequential orders converge to different conditional limits that correspond to different theoretical scenarios of the plasma macrophysical evolution. The second aspect that was ignored by predecessors is that, with the traditional replacement of real plasmas by plasma ensembles, the theory loses a lot of its informativeness. Indeed, conclusions about the mutual influence of the statistics of the ensemble strongly depend on the composition of the ensemble; therefore, they cannot be treated as objective laws of the physical evolution of the system. Additionally, note that the above two aspects are inseparable of each other: the first implies the second and vice versa [3,4]. Having realised the reasons for the theory non-informativeness, we have formulated principles that help to reduce full plasma descriptions to kinetic models of plasma evolution with as high informativeness as possible. These principles are rejecting the traditional plasma ensemble averaging and the direct time integration of intermediate evolution equations that appear when reducing the full plasma description. Bearing them in mind, we have developed a technique that is suitable for studying phenomena in a weakly turbulent plasma. (In view of the purpose of the corresponding theoretical apparatus, it is most natural to refer to it as a highly informative correlation analysis of plasma kinetics.) Various aspects of this technique were discussed in References [3][4][5][6][11][12][13][14][15][16]. Some stages of its development are reflected in books [1,2]. Our recent contributions to the field are papers on drift in ordinary space and wavenumbers of non-potential plasma waves in an inhomogeneous plasma [6,17] and on the role of forced plasma oscillations that accompany such a drift [18]. Undoubtedly, weakly turbulent plasma phenomena are not the only ones for which it is possible to develop somewhat informative theoretical plasma scenarios. In general, the identification of plasma contexts that have this property and the development of tools for generating informative scenarios in respective physical situations should be an extremely important component of theoretical plasma research. We stress that there is no universal calculation technique: each plasma problem dictates its own logic of developing possibly more informative plasma kinetic scenarios. The simplest illustration of this thesis is the problem of thermodynamic relaxation of a weakly non-ideal nonequilibrium homogeneous plasma. (A plasma is considered to be weakly non-ideal if the number of particles in the Debye sphere N D = n e r 3 D notably exceeds unity: the smaller N D , the less ideal the plasma. Here n e is the density of plasma electrons and r D is the plasma Debye length.) In such a plasma, the particles undergo Coulomb collisions, which lead to plasma thermalisation. For this process, as for the phenomena in a collisionless weakly turbulent plasma, it is possible to develop informative scenarios. At the same time, the machinery for developing highly informative kinetic models of evolving weakly turbulent plasma is useless here. Informative kinetic scenarios of plasma evolution due to Coulomb collisions can be important, say, for research on nuclear fusion. With this in mind, we have performed a study that aimed at creating suitable tools for developing relevant informative scenarios. 
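To make the weak-non-ideality condition concrete, the sketch below evaluates the Debye length, the plasma parameter N_D = n_e r_D^3 as defined above, and the electron plasma frequency for illustrative fusion-grade values of density and temperature; the numbers are assumptions for illustration, not taken from the paper, and the final line uses the text's ordering ν_ee ≪ ω_pe, i.e. ν_ee ~ ω_pe/N_D, with Coulomb-logarithm factors omitted.

```python
import numpy as np
from scipy.constants import epsilon_0, e, m_e

def debye_parameters(n_e: float, T_e_eV: float):
    """Debye length r_D [m], plasma parameter N_D = n_e * r_D**3 (the text's definition),
    and electron plasma frequency omega_pe [rad/s] for density n_e [m^-3] and T_e [eV]."""
    r_D = np.sqrt(epsilon_0 * T_e_eV * e / (n_e * e**2))
    N_D = n_e * r_D**3
    omega_pe = np.sqrt(n_e * e**2 / (epsilon_0 * m_e))
    return r_D, N_D, omega_pe

# Illustrative fusion-grade numbers (assumed): n_e = 1e20 m^-3, T_e = 10 keV.
r_D, N_D, omega_pe = debye_parameters(n_e=1e20, T_e_eV=1e4)
print(f"r_D ~ {r_D:.2e} m, N_D ~ {N_D:.2e}, omega_pe ~ {omega_pe:.2e} rad/s")
print(f"nu_ee ~ omega_pe / N_D ~ {omega_pe / N_D:.2e} s^-1")
```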
A report on this study is currently under consideration for publication in Contributions to Plasma Physics [19]. We note that the leading order of our kinetic scenario of the plasma evolution due to Coulomb collisions of particles in homogeneous plasmas turned out to be consistent with the well-known Lenard [20]-Balescu [21] plasma kinetic equation. It should be stressed that Lenard and Balescu both considered this phenomenon within the plasma ensemble paradigm. The focus on probabilistic plasma ensembles predetermined the sets of ideas underlying the logics of their intermediate calculations. Obviously, a further extension of these logics turned out to be technically unfeasible: the history of publications on modelling the evolution of plasma due to Coulomb collisions does not contain reports on the development of kinetic equations that are superior in accuracy to the Lenard-Balescu equation. In the above-mentioned research report, our goal was to take the effect of temporal changes in plasma distributions on the plasma evolution scenario into account. This goal was achieved on the basis of our two-time formalism [17] by an auxiliary integration of some formal equation that "governs" time advances of a formal function that is somehow similar to the two-particle distribution function in the concept of plasma kinetics after Bogoluibov [22]-Born-Green [23]-Kirkwood [24]-Yvon [25] (BBGKY kinetics). This integration helped to guess the structure of the real (well defined) function, the two-point correlation function, that governs time advances of one-particle distribution functions. The proposed function turned out to satisfy the natural equation of its evolution. Still, we have some internal dissatisfaction, since our key function of the theory only echoes, in its structure, with the result of the auxiliary calculation. On the one hand, the lack of full correspondence between this function and the result of the formal calculation is quite natural, since the formal function used in the calculation does not reflect the objective physical relations that are pertinent to the plasma under consideration. On the other hand, there should exist a direct derivation of the above key function based on the integration of the natural equation of the plasma evolution, whereas we had not succeeded in deriving it. In this paper, we report a solution to the latter problem: we present a reasoning approach that is based on a full-scale two-time formalism with integration of the natural evolution equation of the two-point correlation function that strictly justifies the structure of the function. Our new study will be presented in this paper in accordance with the following plan. In the next section, we briefly outline the principals of the two-time formalism. Section 3 uncovers the logic of consideration in paper [19] and it presents a slightly more clarified version of the two-point correlation function that has been developed according to this logic. Section 4 describes a completely new way of looking at the problem. The results of the study are commented on in the Conclusions. Key Equations of the Two-Time Formalism Stating the need to refrain from ensemble averaging, the theorist is faced with the problem of considering the full plasma description, the integration of which involves the simultaneous integration of the equations of motion for all individual charged plasma particles. 
On the one hand, such an integration presupposes the knowledge of the initial data on the position and momentum of each charged particle, which are never known. On the other hand, even if it was known, the task of simultaneously integrating infinitely many equations of motion is technically infeasible. This inevitably forces one to construct a Vlasov type statistic of particle distribution f α ( r, p, t) [26] from the Klimontovich's distribution N α = ∑ n δ 3 ( r − r n (t))δ 3 ( p − p n (t)) [8,9]: it was the construction of such statistics that was a consequence of the use of plasma ensemble averaging by our predecessors. (In the Klimontovich distribution, which we call the microdistribution, the subscript n numbers the particles of species α, and the functions r n (t) and p n (t) describe the trajectories of individual particles.) In the corresponding functional role, averaging over a plasma ensemble can only be replaced by contextually oriented averaging over a sufficiently large neighbourhood of the current point ( r, p) of the R ⊗ P phase space. (Here the sign ⊗ denotes the product of two spaces. In what follows, it will also denote the direct tensor product). In this paper, we consider the effect of Coulomb collisions in a macroscopically homogeneous plasma. For this, it is natural to use as the above neighbourhoods uniform six-dimensional parallelepipeds with sufficiently small momentum dimensions and sufficiently large spatial dimensions. As a result, the bulk of the averaged microdistribution f α turns in the sense of mathematics into a well-defined statistic that is independent of r; its transformations in time follow the equation (Here, and in what follows, we will use the notation of tensor analysis, although no freedom to change the reference frame is implied: co-and contravariant indices are distinguished for an unambiguous interpretation of formulae). The equation is written under the usual assumption that the potential part only of microstructural electric fields in the plasma is important in Coulomb collisions. The angle brackets denote averaging over the abovementioned parallelepiped-shaped neighbourhood of the product of the microstructural part of the microdistribution, δN α ( r, p, t) = N α ( r, p, t) − f α ( r, p, t), and the corresponding microstructural part E( r, t) of the electric field. The right-hand side of Equation (1) contains particular data on the two-point correlation function, δN α ( r, p, t) E( r , t ) . Here, we have intentionally split the time and spatial variables of the two objects under the averaging sign to simplify the calculation process. The notation indicates that the arguments r and p of δN α run through the neighbourhood in the phase space that was chosen to construct f α at the current point ( r, p), and the spatial arguments of E( r , t ) vary synchronously with the spatial argument of δN α : the difference between these arguments is fixed in the averaging. (In this regard, our concept of the twopoint correlation function differs substantially from the concepts of two-point functions that can be encountered in many papers that rely on traditional approaches. Particularly, we can point to the concept of phasestrophy (otherwise the two-point phase space density correlation [27]) mentioned in Reference [28]. 
Although the concept of the latter object involves some averaging in phase space (called averaging over the spatial coordinate of the centre of mass), it was defined as the technical implementation of averaging over an ensemble of plasmas. The reference to this object is motivated by recalling exotic objects, such as convective cells [29] and phase space density granulations [30][31][32]. Meanwhile, had these or any other exotic objects of the traditional plasma theory been of any importance for plasma physical manifestations, then they would have contributed to our two-point correlation function, and their cumulative effect on plasma manifestations would have been adequately described by the equations of two-time formalism provided that the corresponding iterative procedure shows some asymptotic convergence.) Hence the system of spatial arguments in our approach is somewhat redundant. (For more detailed comments on this issue, see References [6,17].) However, this system is very convenient due to its internal symmetry. The second equation of the two-time formalism is either the evolution equation for the two-point correlation function or the evolution equation for the two-time correlation function Φ Φ Φ( r, t, r , t ) (since the magnetic microfields generated by plasma particles can be neglected in the problem under consideration, the last function can be defined as Φ Φ Φ( r, t, r , t ) = E( r, t) ⊗ E( r , t ) ). For modelling the kinetics of plasma collisional relaxation, the use of the first equation is more reasonable. (The other equation, the evolution equation for the two-time correlation function, has been widely used in our previous studies of phenomena in weakly turbulent plasma.) A number of theoretical objects are required to represent this equation. The first is the bare Green function of particles of a given species α, 0 G α ( r, p, t, r , p , t ). It satisfies the causality principle: at t < t , the function is identically zero. In the domain t > t , the function under study evolves according to the equation with the initial data The second object is the electromagnetic Green function F F F , which conceptually corresponds to the well-known delayed potentials. This function is a definite integro-differential operator that gives an expression for the electromagnetic field (EMF) tensor in terms of the charge density. That is, in the absence of external electromagnetic radiation, the EMF tensor in the plasma is The explicit form of F F F is not needed for our considerations. An appropriately accurate equation for the evolution of the two-point correlation function δN α ( r, p, t) E( r , t ) was developed in [33], where its full-scale presentation (and derivation) was given in graphic form. We do not need to repeat this for the sake of the current study. We only note that, in Reference [33], this equation is written in Figure 15 after the graphical notation that was introduced in the paper. The equation governs the advances of δN α ( r, p, t) E( r , t ) with an increase in its entry time t. According to the causality principle, the equation is only consistent for t t . That is, had we known the initial value δN α ( r, p, t ) E( r , t ) , we could have integrated the equation to calculate the function at larger entry times t > t . The above-mentioned equation from [33] has sufficient accuracy for the proper modeling of the plasma kinetics in the next to the leading order of the expansion in N −1 D . 
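Several displayed equations in this part of the text (Equation (1) and the defining relations for the bare Green function) did not survive extraction. For orientation, plausible reconstructions are given below; they assume a Klimontovich-type treatment with purely potential microfields and a free-streaming "bare" propagator, and are not necessarily the authors' exact notation.

```latex
\frac{\partial f_\alpha(\vec{p},t)}{\partial t}
  \;=\; -\,e_\alpha\,\frac{\partial}{\partial p_i}\,
  \bigl\langle\, \delta N_\alpha(\vec{r},\vec{p},t)\, E^{\,i}(\vec{r},t) \,\bigr\rangle ,
```

```latex
\left(\frac{\partial}{\partial t} + \vec{v}\cdot\frac{\partial}{\partial \vec{r}}\right)
{}^{0}G_\alpha(\vec{r},\vec{p},t,\vec{r}\,{}',\vec{p}\,{}',t') = 0 \quad (t>t'), \qquad
{}^{0}G_\alpha\big|_{t\to t'+0}
  = \delta^{3}(\vec{r}-\vec{r}\,{}')\,\delta^{3}(\vec{p}-\vec{p}\,{}') ,
```

so that, under these assumptions, the bare propagator is simply δ³(r − r′ − v(t − t′)) δ³(p − p′) for t > t′ and zero for t < t′. The evolution equation for the two-point correlation function itself (Figure 15 of Reference [33]) is not reproduced here.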
The researcher can easily develop more precise analogs of this equation: the principles of corresponding derivations are set out in Reference [34]. Increasing the order of the corresponding derivations makes sense up to the order ∝ N D : higher orders diverge due to the asymptotic nature of the convergence of the theory. Preliminary Explanations Following Equation (1), we need the feature δN α ( r, p, t) E( r, t) to determine the rate of change of the distribution function. In order to obtain it, we will develop a twopoint correlation function δN α ( r, p, t) E( r , t ) as a natural solution to the corresponding evolution equation. In fact, we will concentrate our efforts on developing an approximation of the function that evolves following only the linear version of this equation, i.e., the one that is based on the terms containing a single wavy line in Figure 15 of paper [33]. This time, our goal will be to properly account for the terms that are due to the next to the next to the leading order of δN α ( r, p, t) E( r , t ) in its expansion in the ratio of the inverse plasma frequency to the characteristic time scale of the plasma evolution. Note that in a quiet plasma that evolves exclusively due to Coulomb collisions, the ratio of inverse plasma frequency to the characteristic time scale of the plasma evolution is of the order 1/N D . Correspondingly, we will develop a solution to the above linearised evolution equation up to corrections scaled as the leading-order function multiplied by 1/N 2 D . The nonlinear corrections due to the terms that are represented by three wavy lines from the more precise version of the equation depicted in the above-mentioned figure have the same order. (The reader can develop the corresponding version using the ideas outlined in Reference [34]. An appropriate starting point for the prospective development of such a version is Figure 4 in that paper.) These corrections can be developed independently, and their proper versions will be obtained via iterations on the basis of the above-mentioned approximation of the two-point correlation function (we mean the versions of the nonlinear corrections that are most consistent with the motivation for the development of a plasma kinetic scenario with maximum informativeness). We will not discuss the corresponding iterations in the present paper. Our approximate equation for the evolution of the two-point correlation function takes the analytical form Here, a key property of two-time functions is that they have rather sharp dependencies on the difference between their entry and exit time variables, whereas their dependencies on the half-sum of these variables are rather smooth. That is, the characteristic time scales of the first dependencies are of the order of the inverse electron plasma frequency ω −1 pe , whereas the time scales of the dependencies on the half-sum of the entry and exit time variables are of the order of the lowest of the inverse collision frequencies of plasma particles, which is the inverse frequency of electron-electron Coulomb collisions ν −1 ee (the smallness of ν ee as compared to ω pe is equivalent to N −1 D 1). A second property of these functions is that they only depend on the difference between the entry and exit spatial variables because of the macroscopic homogeneity of the plasma. 
Therefore, we will use the Fourier transform in space according to the conventions This yields The two-time correlation function satisfies the Poisson equation: Thus, Equation (5) is conceptually an equality that contains some integral operator acting on the two-point correlation function δN α E γ k ( p, t, t ). Using this equation, one can express the two-point correlation function The equation cannot be integrated for t < t : recall the causality principle. Meanwhile, the function δN α E γ k ( p, t, t ) is the spatial Fourier transform of a statistic that is well defined for any sequence of its entry and exit temporal arguments. Moreover, the more accurate evolution equation that is presented in Figure 15 from Reference [33] shows that the advances in time of the two-point correlation function δN α E γ k ( p, t, t ) depend not only on the data of the function with an entry time exceeding the exit time, but also on the data of the function with the reverse order of these times. That is, we should develop an approximation of the two-time correlation function δN α E γ k ( p, t, t ) for an arbitrary sequence of its entry and exit time arguments. The idea of this approximation was put forward in Reference [19] based on auxiliary integration of a formal equation defining the time advances of the formal function δN α δN α k ( p, t, p , t ). We stress that this function is not a statistic. Its spatial original (inverse Fourier transform) is represented as the average where the difference p − p is fixed similarly to the difference r − r while r and p run through the neighbourhood selected to define the distribution function f α at current point r, p. This is an extremely rugged function containing spikes that are scattered in the phase space, depending on the details of the microdistribution. The "evolution equation" itself is written as Using the Laplace transform and its inversion, we have expressed δN α δN α k ( p, t, p , t 0 ) for an entry time t that satisfies the constraint ω pe −1 t − t 0 (ν ee ) −1 through the data δN α δN α k ( p, t 0 , p , t 0 ). (Note: the effective expression of this data is 3 . It is rather rough: it does not provide an understanding of whether the plasma is drastically dynamic or whether it has reached a rather quiet state and evolves slowly further on. This "roughness" of the data is just the reason for the persistence of some uncertainty as a result of the formal calculation). Subsequently, we have used the Poisson equation to form the initial data of the two-point . After that, we have applied the Laplace transform to the Equation (5) and obtained some expression for the two-point The time delays t − t 0 and t − t 0 satisfying the above constraints are quite sufficient for the transition of the plasma to a relatively quiet state, even with an exotic initial arrangement of its particles in space (when the plasma is in a notably nonequilibrium state) and, at the same time, these time delays are too small for some significant changes in the distributions of plasma particles f α to occur. This ultimately yielded some formal expression for the two-point correlation function δN α E γ k ( p, t, t ) that depended on three time variables: the entry time t, the exit time t , and the variable t 0 , which stands for a somewhat arbitrary previous moment. Here, we will not present the expression itself. Instead, we present the result of a more accurate calculation following the same principles. By presenting the one, we pursue two goals. 
The first is to obtain a clearer understanding of the structure of the two-point correlation function, and the second is to further illustrate the consistency of the new logic of developing the two-point correlation function, which is discussed in the next section. The new expression of the two-point correlation function is Here, is the linear dielectric function of the plasma, and we have also introduced notation The function G α kω (t , t 0 ) is well defined on the axis of real ω, but we intend to manipulate with its analytical continuation at least to a narrow strip in the plane of complex ω around the line Im ω = 0. The variable ω on the right-hand side of relation (8) is also implied to be a complex one. The line of integration over this variable goes from the left to right and it passes over all singularities of the integrand (note that the only important singularity of the integrand is the one due to the resonance ω = k · v ). Note that the variable t enters the expression δN α E γ k ( p, t, t ) exclusively within the argument of the exponent: all other components of the expression depend on t and t 0 . The temporal variable t 0 stands for a somewhat arbitrary previous moment. Its presence in the expression is just due to the fact that corresponding calculation is purely formal. We have concluded that one should set t 0 = t in the case t > t and t 0 = t in the case t < t to obtain a reliable approximation of the two-point correlation function. The discussion shown in the next section will provide additional support for the specified value assignments to the variable t 0 . Note that the above-mentioned formal calculation leading to relation (8) is a tedious task. For those who wish to repeat it, in Appendix A we list the basic mathematical relations that we used in this calculation. Determining the Structure of the Two-Point Correlation Function from First Principles We assume that the initial data of the two-point correlation function δN α E γ k ( p, t , t ) is known. Using the Laplace transform technique, we can integrate Equation (5) and obtain its solution for the time domain ω −1 pe t − t ν −1 ee . In this domain, the contributions to the inverse Laplace transform of the poles due to the zeros of the linear dielectric function ε kω (t ) are exponentially damped, which allows for obtaining an easily structured result. However, we will not follow the corresponding calculation program straightforwardly. Our calculations will include two stages. We will first develop an expression for the twotime correlation function Φ k Φ k Φ k (t, t ), and then derive a two-point correlation function that generates the last expression. Two-Time Correlation Function In view of the Poisson Equation (6), the two-time correlation function is nothing else than the integral of contributions from groups of particles that differ in momentum, and each contribution can be calculated well while ignoring the zeros of the dielectric function. Accordingly, it is possible to change the sequence of integration in momentum and frequency after the formation of the integral based on inverse Laplace transforms for differing groups of particles. That is, we can form the Laplace transform of Φ k Φ k Φ k (t, t ) and ignore the exponentially damped contributions due to the zeros of the dielectric function when inverting this Laplace transform. 
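Several of the displayed definitions referred to in this part of the text (the spatial Fourier convention, the Poisson relation for the potential microfield, the linear dielectric function, and the Laplace-transform pair introduced in the next passage) were likewise lost in extraction. The standard forms below, written in Gaussian units for a purely potential (longitudinal) microfield, are assumptions about the authors' notation rather than reproductions of it:

```latex
A(\vec{r}) = \int d^{3}k \; A_{\vec{k}}\, e^{\,i\vec{k}\cdot\vec{r}}, \qquad
\vec{E}_{\vec{k}}(t) = -\,\frac{4\pi i\,\vec{k}}{k^{2}}
  \sum_\alpha e_\alpha \int d^{3}p\; \delta N_{\alpha\,\vec{k}}(\vec{p},t),
```

```latex
\varepsilon_{\vec{k}\omega}(t') \;=\; 1 \;+\; \sum_\alpha \frac{4\pi e_\alpha^{2}}{k^{2}}
  \int d^{3}p \;\frac{\vec{k}\cdot\partial f_\alpha(\vec{p},t')/\partial\vec{p}}
                     {\omega-\vec{k}\cdot\vec{v}} ,
```

```latex
F_{\omega}(t') = \int_{t'}^{\infty} dt \; e^{\,i\omega (t-t')}\, F(t,t'), \qquad
F(t,t') = \frac{1}{2\pi}\int_{C} d\omega \; e^{-i\omega (t-t')}\, F_{\omega}(t') ,
```

with ω in the dielectric function understood as approaching the real axis from above (equivalently, the ω-contour passing over the resonance at ω = k·v), and with the inversion contour C running from left to right above all singularities of the integrand, as stated in the text.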
The Laplace transforms of two-time functions will be introduced according to the rule and the inverse Laplace transforms, according to Here, C denotes the line in the complex plane ω that goes from left to right and passes over all singularities of the integrand. After the Laplace transform, Equation (5) takes the form Note: the solenoidal components of microfields in the plasma are assumed to be negligible, at least in terms of their contribution to the two-time correlation function. Therefore, the following equality is valid: Similarly, This makes it possible to simplify notation and, hence, the calculus. Namely, we denote This yields the following form of Equation (11) The symbol ∇ p denotes the derivative over the momentum p. Solving the equation for F α kω ( p, t ) and substituting the result into the Poisson equation, we obtain The solution to this new equation can be constructed by iterations. We have: Obviosuly, all of the components of h contain the summation over α with the weight 4πe α and the p -integration of F α k ( p, t , t ). Bearing this in mind, we only control the dependence of these components on ω: An equivalent form of the right-hand side here is Performing the inversion of the Laplace transform, we find that this form leads to the following structure Hence, for the time domain ω −1 pe t − t ν −1 ee , we have developed: Note: regardless of whether the plasma was initially drastically dynamic or not, we assume that, by the time t = t , it has acquired a quiet state with a rather large time scale of further evolution, which is of the order of the inverse electron-electron collision frequency ν −1 ee . Therefore, expression (13) should be reliable not only at t − t ω −1 pe , but also at smaller times t, up to t = t . Similarly, at times −ν −1 ee t − t < 0, the two-time correlation function should comply with the formula Naturally, when the time variable t passes through t = t , no discontinuity in the twotime correlation function should be observed. That is, with the growth of t, the two-time correlation function following Equation (14) is transformed after t = t into the one that corresponds to Equation (13). Accordingly, a compound form that fits the entire time where Q α is some function that only depends on the momentum p and time, and we have introduced the variable t 0 to denote the last dependence. This variable stands for the argument t for t > t and the argument t for t < t . The meaning of the function Q α can be stated, as follows. Setting t = t , we can directly calculate the two-time correlation function Φ Φ Φ( r, t, r , t ) for the case where the displacement | r − r | is small when compared to the mean interparticle distance in the plasma. At corresponding distances, the influence of neighbours on the field around a plasma particle can be neglected, and the function becomes the sum of the equal terms standing for individual charged particles: We can now modify this result to take plasma evolution into account on a time scale that is much shorter than the time of particle free flight through the mean interparticle distance. This time depends on electrons and, since the typical velocity of bulk electrons is of the order of the electron thermal velocity v Te , we obtain Using this formula, we can obtain the data of the spatial Fourier transform of the function for large wavenumbers at small time delays (k (n 0 ) 1/3 , 0 < t − t 1/(kv Te )) [35]: Naturally, the right-hand side here should satisfy Equation (15). 
Hence, there should be (We comment that the dielectric function ε k( k · v) is indistinguishable from unity for the above large wave vectors.) Two-Point Correlation Function In this subsection, our goal is to invent a procedure that allows us to reverse the relation that is given by the Poisson Equation (6), i.e., to develop a unique two-point correlation function δN α E γ k ( p, t, t ) that, in each situation, generates the given two-time correlation function. For the class of evolving plasmas of our interest, i.e., those that have reached a relatively quiet state at the moment under consideration, the latter problem has a logical solution. We take a finalized expression for the two-time correlation function, (Here, we have replaced the integration variable p and the type of particles under the sign of summation α with p and α , respectively). Its entry time variable should be the entry time variable for the desired two-point correlation function δN α E γ k ( p, t, t ). We separate the structure in the pre-exponent of the integrand that depends on t 0 and t . In addition, in this structure, we will explicitly distinguish the dependence on k · v . The latter presupposes the use of the notation (10). This allows us to write We now expand the remaining part of the pre-exponent in powers of t − t up to the next to the next leading order: Integrating the right-hand side by parts, we reduce it to the form The last transformation in this chain of equations was carried out, as follows. In the integrand, we first separated the part of the pre-exponent that contains the second-order time derivative of ε kω −1 . The fact that the term with this derivative enters the expression without any differentiation over ω indicates that the previous calculation was correct. (This is in our order of consideration. Desiring to generate higher orders, one should make sure that in the above-mentioned pre-exponent in the term with the highest-order time derivative of the function ε kω −1 , this function is undifferentiated over ω.) Subsequently, we separated those terms with the first-order time derivative of ε kω −1 in which this derivative was not differentiated with respect to ω (the terms with both time-and ω-derivatives of ε kω −1 may well be present in the multipliers of these terms). Finally, terms remained that contained ε kω −1 without any differentiations. Now consider the last steps. In the developed formula, we make the following substitutions: 1 ε kω (t ) ∂ ∂t platform will form a natural basis for modelling plasma transport phenomena for classical high-temperature plasma. Thus, a basis is created for the development of more informative kinetic scenarios of the plasma evolution due to Coulomb collisions as compared to the previously formulated scenarios of this evolution. Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflict of interest. Appendix A. Mathematical Tricks Used in the Formal Calculation Following the Logics of a Preceding Study The main difficulty in the formal calculation of the two-point correlation function δN α E γ k ( p, t, t ) is to express it in terms of its "initial data" δN α E γ k ( p, t 0 , t ) where ω −1 pe t − t 0 ν −1 ee . Using the Laplace transform to integrate the corresponding evolution equation, we find two terms in the inverse Laplace transform. The first term is , and the second is the integral over ω, in which the integrand contains the integral over the dummy momentum p . 
The discussion below will focus solely on this integral. Significant contributions to this integral come from the poles originating from the multipliers $1/(\omega - \mathbf{k}\cdot\mathbf{v})$ and $1/(\omega - \mathbf{k}\cdot\mathbf{v})^{n}$. In the intended order of expansion in powers of the ratio of the inverse plasma frequency to the characteristic time of the plasma evolution, the exponent $n$ takes the values $n = 1, 2, 3$. That is, under the sign of integration over $\omega$, we find structures of the type To express the contributions to the inverse Laplace transform due to these structures, we substitute them into the integrand according to the following presentations: Correctly approximating the inverse Laplace transform (with integration over $\omega$ by parts, if necessary), we obtain an integrand in the integral over the dummy $\mathbf{p}'$ that has no singularities at $\mathbf{k}\cdot\mathbf{v} = \mathbf{k}\cdot\mathbf{v}'$. This makes it possible to treat the integral over $\mathbf{k}\cdot\mathbf{v}'$ as a principal value integral. The initial data $\delta N^{\alpha} E^{\gamma}_{\mathbf{k}}(\mathbf{p}', t_0, t')$ in the $\mathbf{p}'$-integral are proportional to $\exp[-i\,\mathbf{k}\cdot\mathbf{v}'\,(t_0 - t')]$. The integration over $\omega$ provides either the exponent $\exp[-i\,\mathbf{k}\cdot\mathbf{v}\,(t - t_0)]$ or the exponent $\exp[-i\,\mathbf{k}\cdot\mathbf{v}'\,(t - t_0)]$, depending on the summand. Accordingly, under the sign of integration over $\mathbf{k}\cdot\mathbf{v}'$, we find terms with smooth functions of this variable that are multiplied by either $\exp[\,i\,(\mathbf{k}\cdot\mathbf{v} - \mathbf{k}\cdot\mathbf{v}')\,(t - t_0)]$ or $\exp[-i\,(\mathbf{k}\cdot\mathbf{v} - \mathbf{k}\cdot\mathbf{v}')\,(t - t')]$. Due to the natural structure of the distribution function, the $\mathbf{p}'$-integrals of these terms converge absolutely. In the terms of the first type, the exponent is rapidly oscillating, which makes it possible to rearrange their integrands via the use of the following simplifying substitutions:
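The text breaks off before the substitutions themselves are listed. For integrands with rapidly oscillating exponents of this kind, the standard long-time limits (of Sokhotski-Plemelj type) are the usual simplification; whether these are exactly the substitutions intended by the author cannot be confirmed from the truncated text, so the following display is only an illustrative sketch of that device:

```latex
\int_{0}^{\,t-t_0} e^{\,i(\mathbf{k}\cdot\mathbf{v}-\mathbf{k}\cdot\mathbf{v}')\,s}\,ds
\;\longrightarrow\;
\pi\,\delta(\mathbf{k}\cdot\mathbf{v}-\mathbf{k}\cdot\mathbf{v}')
\;+\; i\,\mathcal{P}\,\frac{1}{\mathbf{k}\cdot\mathbf{v}-\mathbf{k}\cdot\mathbf{v}'},
\qquad t-t_0 \;\gg\; \omega_{pe}^{-1}
```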
8,587.6
2021-04-23T00:00:00.000
[ "Physics" ]
Charge Compensation Mechanism of a Na+-coupled, Secondary Active Glutamate Transporter* Background: Reorientation of the binding sites of the glutamate transporter requires K+ translocation. Results: Single turnover K+ translocation is associated with negative transmembrane charge movement. Conclusion: The empty glutamate transporter carries an apparent charge of −1.23, overcompensating for the positive charge of the translocated K+ ion. Significance: Charge compensation may be a general strategy of Na+-dependent transporters to overcome electrostatic barriers of charge transport. Forward glutamate transport by the excitatory amino acid carrier EAAC1 is coupled to the inward movement of three Na+ and one proton and the subsequent outward movement of one K+ in a separate step. Based on indirect evidence, it was speculated that the cation binding sites bear a negative charge. However, little is known about the electrostatics of the transport process. Valences calculated using the Poisson-Boltzmann equation indicate that negative charge is transferred across the membrane when only one cation is bound. Consistently, transient currents were observed in response to voltage jumps when K+ was the only cation on both sides of the membrane. Furthermore, rapid extracellular K+ application to EAAC1 under single turnover conditions (K+ inside) resulted in outward transient current. We propose a charge compensation mechanism, in which the C-terminal transport domain bears an overall negative charge of −1.23. Charge compensation, together with distribution of charge movement over many steps in the transport cycle, as well as defocusing of the membrane electric field, may be combined strategies used by Na+-coupled transporters to avoid prohibitive activation barriers for charge translocation. Forward glutamate transport by the excitatory amino acid carrier EAAC1 is coupled to the inward movement of three Na ؉ and one proton and the subsequent outward movement of one K ؉ in a separate step. Based on indirect evidence, it was speculated that the cation binding sites bear a negative charge. However, little is known about the electrostatics of the transport process. Valences calculated using the Poisson-Boltzmann equation indicate that negative charge is transferred across the membrane when only one cation is bound. Consistently, transient currents were observed in response to voltage jumps when K ؉ was the only cation on both sides of the membrane. Furthermore, rapid extracellular K ؉ application to EAAC1 under single turnover conditions (K ؉ inside) resulted in outward transient current. We propose a charge compensation mechanism, in which the C-terminal transport domain bears an overall negative charge of ؊1. 23. Charge compensation, together with distribution of charge movement over many steps in the transport cycle, as well as defocusing of the membrane electric field, may be combined strategies used by Na ؉ -coupled transporters to avoid prohibitive activation barriers for charge translocation. Glutamate transport by the members of the SLC1 family (1,2), as well as secondary active transport by other solute carriers (3), is thought to occur through an alternating access mechanism (4). Such mechanisms assume that the transporter cycles through at least two discrete conformational states, one of them allowing access of substrate to its binding site from the extracellular side and the other one allowing access from the cytoplasm. 
Glutamate and 3 Na ϩ ions, when bound to the transporter at the same time, initiate the conformational change(s) associated with alternating accessibility. Based on recent crystallographic and computational evidence, it was hypothesized that alternating accessibility is mediated by sequential movement of an external gate (reentrant loop 2 (5-7)) and an internal gate (reentrant loop 1 (2,8)). In addition to the opening and closing of gates, glutamate transport is thought to be associated with large scale, rigid body conformational changes (9), one of them being the movement of the C-terminal transport domain that leads to the translocation of glutamate along the bilayer normal (1,2). This movement has been described in terms of a hydrophobic interaction mechanism, in which the trimerization domain provides an unstructured, hydrophobic surface, along which the transport domain can move inward and outward (2). Due to the large number of potentially charged residues that are moved in the transport process, it is likely that in addition to the hydrophobic effect, electrostatics play an important role. Because the movement of 3 Na ϩ ions across the hydrophobic barrier of the membrane is expected to be unfavorable, it has been suggested that the positive charge of the cations is at least partially compensated for by negative charge of the binding site(s) (10 -12). Consistent with this suggestion, several negatively charged amino acid residues, which are highly conserved within the SLC1 family and sensitive to mutation, are located in the C-terminal transport domain (13)(14)(15)(16). K ϩ also initiates alternating accessibility in a step separable from Na ϩ /glutamate movement (K ϩ countertransport (10,17,18)). Based on indirect evidence from the voltage dependence of steady-state glutamate-induced transport currents (10,19), as well as measurements on fluorescently labeled transporters (11), it was speculated that the K ϩ relocation step(s) is associated with net negative charge movement, despite the positive charge of the transported K ϩ ion. However, no direct experimental evidence for the voltage dependence of the K ϩ relocation step(s) has been obtained. In this work, we have used a combination of experimental and computational methods to test the charge compensation hypothesis. Our results show that conformational changes associated with K ϩ -K ϩ exchange proceed in at least two electrogenic steps with net negative charge movement. Consistently, computations of electrostatic energies demonstrate negative valence of the relocation step. K ϩ binding depends on voltage only to a small extent. The results are consistent with a multistep charge compensation mechanism, in which fast cation binding precedes electrogenic cation exchange through an overall negatively charged transport domain. EXPERIMENTAL PROCEDURES Cell Culture, Transfection, Whole-cell Current Recording, and Site-directed Mutagenesis-HEK293 cells (American Type Culture Collection number CRL 1573) were cultured as described previously (10,12). The cell cultures were transiently transfected with wild-type or mutant EAAC1 cDNA inserted into a modified pBK-CMV expression plasmid (10) by using FuGene HD transfection reagent according to the protocol supplied by the manufacturer (Roche Applied Science). One day after transfection, the cells were used for electrophysiological measurements. Glutamate-induced EAAC1 currents were measured in the whole-cell current recording configuration. 
Whole-cell currents were recorded with an EPC7 patch clamp amplifier (ALA Scientific, Westbury, NY) under voltage clamp conditions. The resistance of the recording electrode was 2-3 megaohms, as described previously (12). In the whole-cell recordings performed at steady state, series resistance was not compensated for because of the small wholecell currents carried by EAAC1. However, series resistance compensation of 60 -80% as well as whole-cell capacitance cancellation were used in the whole-cell recording experiments involving step changes of the membrane potential, in order to accelerate the capacitive charging of the membrane in response to the voltage jump. Typical time constants for membrane charging under these conditions were 200 -250 s (20). Ionic Conditions for K ϩ Exchange Experiments-K ϩ /Cs ϩ exchange was established by using symmetrical [K ϩ ] on both sides of the membrane (140 mM) in the absence of Na ϩ and glutamate. The composition of the solutions was as follows: 140 mM K/CsMes, 2 mM Mg(gluconate) 2 , 2 mM Ca(gluconate) 2 , 10 mM HEPES, pH 7.3 (extracellular), 140 mM K/CsMes, 2 mM Mg(gluconate) 2 , 5 mM EGTA, 10 mM HEPES, pH 7.3 (intracellular). To obtain the specific component of the currents, we subtracted nonspecific currents in the presence of the competitive inhibitor DL-threo-␤-benzyloxyaspartate (TBOA) 2 (21). TBOA does not bind to EAAC1 in the absence of Na ϩ and may bind, but only weakly, in the presence of extracellular K ϩ . Therefore, TBOA binding was promoted by including a small amount of Na ϩ (2 mM in the presence of extracellular K ϩ and 5 mM in the presence of extracellular NMG ϩ ) in the TBOA-containing solution, as described previously (22). This small amount of Na ϩ did not elicit nonspecific currents. Under these ionic conditions, 200 M TBOA is a supersaturating concentration (about 100-fold K m ), so the TBOA binding site should be saturated at all voltages. The K m for TBOA under these conditions has been experimentally determined previously (22). Ionic Conditions for Na ϩ /Glutamate Exchange Experiments-Glutamate/Na ϩ exchange was established by using symmetrical [Na ϩ ] ϭ 140 mM and [glutamate] ϭ 10 mM on both sides of the membrane. Under these conditions, the internal and external binding sites for Na ϩ and glutamate should be saturated, based on results from previous studies (12,22). The composition of the solutions was as follows: 140 mM NaMes, 2 mM Mg(gluconate) 2 , 2 mM Ca(gluconate) 2 , 10 mM HEPES, 10 mM glutamate, pH 7.3 (extracellular), 140 mM NaMes, 2 mM Mg(gluconate) 2 , 5 mM EGTA, 10 mM HEPES, 10 mM glutamate, pH 7.3 (intracellular). Ionic conditions for forward and reverse transport were as published previously (10). For charge-voltage relationships, we used the Boltzmann equation to fit the experimental data. Here, Q max is the maximum charge movement, and Q offset is the holding potential-dependent offset of the charge movement, V1 ⁄ 2 is the midpoint potential, and F is the Faraday constant. R and T have their usual meaning, and z Q is the valence of the charge movement, which is obtained from the fit. Computation of the Valence of the Transport Domain-We have used the Adaptive Poisson-Boltzmann Solver (APBS) (23), together with the APBSmem Java routines (24) for the calculation of electrostatic energies of the glutamate transporter embedded into an implicit membrane. 
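Before moving on to the Poisson-Boltzmann computations, note that the Boltzmann charge-voltage fit described just above appears to have lost its equation in extraction. A standard form consistent with the listed parameters (Q_max, Q_offset, V_1/2, z_Q, F, R, T) is the following; this is a reconstruction rather than a quotation, and the sign in the exponent depends on the convention chosen for the direction of charge movement:

```latex
Q(V) \;=\; Q_{\mathrm{offset}} \;+\; \frac{Q_{\max}}{1 + \exp\!\big[\,z_{Q}\,F\,(V - V_{1/2})/(RT)\,\big]}
```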
In the presence of an internal membrane potential, V, the following modified version of the linearized Poisson-Boltzmann equation is used, according to the formalism first introduced by Roux (25). Ϫᰔ͑⑀͑r ជ͒ᰔ͑r ជ͒͒ ϩ 2 ͑r ជ͒͑r ជ͒ ϭ e4 Here, ⑀ is the dielectric constant, which depends on the spatial coordinate, ϭ e⌽/k b T, where ⌽ is the electrostatic potential, e is the elementary charge, T is the temperature, and k b is the Boltzmann constant. is the Debye-Hückel screening constant, and is the charge density. f(r) is the Heaviside step function, which is set to 1 in the intracellular solution and is 0 in the membrane, protein, and extracellular solution. Details of the method can be found in Ref. 24. The total electrostatic energy, E, is then computed by summing up over the product of the local charge and the potential (26), where dV is the volume element. To compute the valence, membrane potentials of varying magnitude, V, are applied to the internal side of the membrane, and the difference in total electrostatic energy, ⌬E, for two protein configurations (e.g. inward and outward facing conformations) is calculated. When plotting ⌬E versus the membrane potential, the valence is obtained from the slope (27). Energy contributions that do not come from movement of protein charges have been subtracted in this approach, after calculating the electrostatic energy in the absence of protein charges. The details of the APBSmem setup are described in the supplemental Methods. The APBSmem approach was validated by using a model system, in which a Na ϩ ion was moved from the water phase into a membrane of 30 Å thickness at a distance of 10 Å below the membrane surface. As expected, the valence of this transition was 0.32. K ϩ -induced Relocation of the Transporter Is Associated with Charge Movement-We first tested whether K ϩ -dependent reaction steps of EAAC1 are electrogenic, by locking the transporter in the K ϩ exchange mode (Fig. 1A, top). Step changes in the membrane potential under K ϩ exchange conditions ( Fig. 1A) induced transient, TBOA-sensitive currents, which decayed with two-exponential components (Fig. 1A). Little voltage-induced charge movement was seen under the same conditions in non-transfected cells (Fig. 1C). As expected, the charge scales linearly with the expression level (n ϭ 12 cells; supplemental Fig. 1) and is virtually eliminated at low extracellular [K ϩ ] (5 mM; supplemental Fig. 3). The transient currents were capacitive in nature (Fig. 1, A and D). The charge movement was voltage-dependent with an apparent valence of 0.41 (supplemental Methods and supplemental Fig. 1, B and C) and with a [K ϩ ]-dependent midpoint potential (Fig. 1E). Together, these results suggest that voltage jumps result in a redistribution of the electrogenic K ϩ exchange equilibrium. This redistribution consists of at least two steps, as indicated by the two exponential components of the transient current decay (supplemental Fig. 2). One component was fast in the microsecond range (average ϭ 0.85 Ϯ 0.2 ms, n ϭ 8), and the other was about 15-fold slower (average ϭ 13 Ϯ 3 ms, n ϭ 8). Contribution of Electrostatics to Conformational Transitions- To test the hypothesis of electrogenic K ϩ -dependent relocation, we computed the valence of the K ϩ -loaded transporter, using the APBS routine (23), numerically solving the linearized Poisson-Boltzmann (PB) equation for various transporter/implicit membrane systems. The simulation setup is shown in Fig. 2A. 
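The valence extraction described above reduces to a linear fit of the electrostatic energy difference against the applied membrane potential. A minimal sketch of that step is shown below; the numerical values are illustrative placeholders, not data from the paper:

```python
import numpy as np

# Applied internal membrane potentials (V) and the corresponding differences in
# total electrostatic energy between the two protein configurations (in eV).
# These numbers are placeholders for illustration only.
V_m = np.array([-0.10, -0.05, 0.00, 0.05, 0.10])        # volts
dE  = np.array([0.123, 0.062, 0.000, -0.061, -0.124])   # eV

# With dE in eV and V_m in volts, the slope of dE versus V_m is the apparent
# valence z (dimensionless); a negative slope means negative charge moves
# toward the intracellular side during the conformational transition.
z, intercept = np.polyfit(V_m, dE, 1)
print(f"apparent valence z = {z:.2f}")
```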
The transporter structures were obtained by homology modeling of the EAAT3 sequence based on the GltPh (aspartate transporter from Pyrococcus horikoshii) template structures (Protein Data Bank codes 2NWX and 3KBC (2,8)). In addition, we used a simplified model, in which only the conserved charged residues were modeled in the absence of protein but retained their correct orientations (Fig. 2B). A biasing potential (Fig. 2, A, C, and D) applied to the intracellular side (24) allows the determination of voltage drop within the transmembrane domain in the absence of intrinsic charges of the protein (25). As a first approximation, we neglected the dipole potential of the membrane. Fig. 2, C and D, shows isopotential planes for transporters, in which all subunits are outward facing (Fig. 2C) and in which one subunit is inward facing ( Fig. 2D; assuming that the subunits transport glutamate independently (9,28)). Transition to the inward facing configuration results in an altered distribution of isopotential planes, with the voltage drop shifted toward the intracellular direction (Fig. 2E). Clearly, such a shift in the interaction of the membrane electric field with the transporter must result in a voltage dependence of the conformational transition if the transmembrane domain of the transporter is charged. Interestingly, insertion of the transport protein into the membrane leads to a defocusing of the transmembrane electric field, as compared with the voltage drop for the membrane-only system in the absence of protein (Fig. 2E). Several charged amino acid residues are conserved within the C-terminal transport domain of the glutamate transporter family, including five acidic amino acids and two potentially positively charged residues (Fig. 2, sequence inset). Conservative, charge-neutralizing mutation of all of these residues results in defects in glutamate transport (Fig. 2F). This result indicates that these potentially charged residues may be important for electrostatic charge compensation. Valences for the outwardto-inward facing transition for all mutant transporters, as calculated through PB analysis, are listed in supplemental Table 1. As expected, charge-neutralizing mutations at positions that move the largest distance within the membrane electric field (Asp-443, Arg-444, and Arg-446), show the largest deviations from the EAAC1(WT) valence. The C-terminal Transport Domain of the Glutamate Transporter Carries Net Negative Charge-Next, we computed the theoretical valence associated with several transitions in the transport cycle, by calculating the difference in total electrostatic energy, ⌬E, of the transporter/membrane systems before and after the structural change as a function of the membrane potential, V m (Fig. 3, A and B) (23). The slope of the ⌬E versus V m relationship is representative of the valence of the charge movement (24,27). We first investigated what is believed to be the major structural change associated with glutamate trans-port from the outward facing to the inward facing conformation (1, 2) (Fig. 3C). The valence of the charge movement was negative in the absence of any bound cations (z ϭ Ϫ1.23) or in the presence of only one bound cation (Cs ϩ , z ϭ Ϫ0.59, or K ϩ , Fig. 3A, top), with negative potential stabilizing the outward facing configuration. The valence did not depend strongly on the exact positioning of the cation (positioned in the GltPh Na1 site or in the substrate binding site, as suggested in Ref. 29). 
The transporter became almost neutralized with two Na ϩ ions bound. In the presence of an additional third Na ϩ ion and a proton (protonation of Glu-373 (30)) and in the fully loaded configuration (with glutamate), the valence of the charge movement reverted to a positive sign (z ϭ ϩ0.15; Fig. 3, A and B). This result suggests that the negative charge of the transporter binding sites and the bound glutamate partially compensates for the positive charges of the three bound Na ϩ ions and the proton, consistent with several reports showing inward charge movement of Na ϩ /glutamate translocation (12,19,20). Consequently, transient currents were observed when the transporter was subjected to voltage jumps in the Na ϩ /glutamate exchange mode (Fig. 3D). In the exchange mode, charge movement is caused mainly by the actual conformational transitions of the transporter but to a lesser extent by cation/substrate binding or unbinding. The apparent valence of the charge movement was 0.43 Ϯ 0.1. This valence is larger than the theoretical computed value shown above for the fully loaded transporter (ϩ0.15). Therefore, it is possible that the negative charge of the binding sites is overestimated because not all acidic side chains are deprotonated. Although previous pK a computations suggested that Asp-454 is deprotonated at pH 7.4 (31), we computed the valence with a protonated Asp-454. A z of ϩ0.40 was found for the fully loaded transporter. This value agrees well with the experimentally observed valence (0.43; Fig. 3E), raising the possibility that Asp-454 is protonated in the fully loaded transporter, consistent with its location deeply buried in the interior of the protein. Charge Movement Is Slowed by Cs ϩ Substitution-If steps associated with K ϩ binding and/or relocation are electrogenic, they should be sensitive to the nature of the monovalent cation. It is known for other glutamate transporter subtypes that Cs ϩ can substitute for K ϩ , but Cs ϩ is transported at a lower rate than K ϩ (32, 33), although transport was also increased by Cs ϩ in one report (34). Consistent with this hypothesis, forward transport currents (Fig. 4, A-C) as well as reverse transport (Fig. 4, B and D) currents in EAAC1 were reduced about 2-2.5-fold when Cs ϩ was used instead of K ϩ as the cation on the trans side of the membrane. Therefore, we used Cs ϩ substitution on both sides of the membrane (Fig. 4E, inset) to obtain further information on the voltage jump-induced charge movements. As shown in Fig. 4E, transient current relaxations in the sole presence of Cs ϩ in response to voltage jumps from Ϫ90 to 0 mV transmembrane potentials were biphasic, as in K ϩ , but displayed smaller peak amplitudes and slower relaxation kinetics. Reduction of the peak current in Cs ϩ , which resulted in a lower signal/noise ratio (Fig. 4E), is expected because the same charge is displaced over a longer time window. The fast phase of the current decay was slowed about 4-fold with a relaxation time constant of 3.3 Ϯ 1.5 ms (n ϭ 6), whereas the slow phase was about 1.6-fold slower with a time constant of 21 Ϯ 4 ms (averages shown in Fig. 4F). Relaxation rate constants in both Cs ϩ and K ϩ were smaller than the rate constants associated with equilibration of the Na ϩ /glutamate translocation step(s) (glutamate exchange mode), as shown in Fig. 4E. This result is consistent with previous suggestions that K ϩ relocation, but not Na ϩ /glutamate translocation, limits the overall turnover rate of the glutamate transporter subtype EAAC1 (10). 
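The biphasic decay of the voltage-jump transients described above is usually quantified by a two-exponential fit. A minimal sketch of such a fit is given below; the trace is synthetic and the parameter values are illustrative, not the recorded data:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, a1, tau1, a2, tau2):
    """Sum of two exponential decays used to fit transient current relaxations."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic transient current (arbitrary units) sampled over 100 ms.
t = np.linspace(0.0, 0.100, 2000)                       # s
i_obs = two_exp(t, 1.5, 0.85e-3, 0.4, 13e-3) \
        + np.random.normal(0.0, 0.02, t.size)           # added noise

p0 = (1.0, 1e-3, 0.5, 10e-3)                            # initial guesses
popt, _ = curve_fit(two_exp, t, i_obs, p0=p0)
a1, tau1, a2, tau2 = popt
print(f"fast tau = {tau1*1e3:.2f} ms, slow tau = {tau2*1e3:.1f} ms")

# Displaced charge = time integral of the fitted transient (amplitudes x time constants)
Q = a1 * tau1 + a2 * tau2
```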
Extracellular K ϩ Binding Is Electrically Silent-A glutamate transporter with the mutation E373Q was previously shown to be defective in K ϩ -dependent relocation (30,35) while still being able to bind extracellular potassium and catalyze Na ϩdependent glutamate translocation (30). As expected, step changes of the membrane potential to EAAC1(E373Q) in the Na ϩ /glutamate exchange mode resulted in large transient transport currents (Q ϭ 320 Ϯ 15 femtocoulombs, n ϭ 5; Fig. 5, A and C). In contrast, charge movement was virtually eliminated in the K ϩ exchange mode (Q ϭ 22 Ϯ 3 femtocoulombs, n ϭ 5; Fig. 5, B and C). This result suggests that the voltage-dependent charge movement observed in the K ϩ exchange mode is caused mainly by K ϩ translocation but not by K ϩ binding to its binding site. Consistently, the apparent affinity of the transporter for extracellular K ϩ or Cs ϩ in the reverse transport mode was virtually independent of the transmembrane potential (supplemental Fig. 4, A-C). We next performed PB calculations for K ϩ binding to three potential binding sites at positions suggested by previous mutagenesis experiments (14,29,35). The valence associated with movement of K ϩ into the substrate binding site (a binding site suggested in Ref. 29) is ϩ0.106 (Fig. 5D). In contrast, the Asp-454 cation binding site (Na1 site in GltPh) is more deeply buried in the membrane, resulting in a valence of K ϩ binding of ϩ0.34 (Fig. 5, D-F). However, direct accessibility of this site to a cation is unlikely because no aqueous pathway exists for an ion to move into this site (31). To obtain an apo-like configuration for the PB analysis, we performed molecular dynamics simulations in the absence of any bound ions/substrates (supplemental Fig. 5). In agreement with previous reports (6, 7, 31), reentrant loop 2 opens after several ns while, simultaneously, water molecules start penetrating the transporter to form an aqueous cavity leading to the Na1 site and the aspartate residue in position 405 (analogous to Asp-454 in EAAC1; supplemental Fig. 5B). Based on this apo-state, a valence for K ϩ binding to the Na1 site of ϩ0.09 was calculated, consistent with an aqueous access pathway for a cation. Finally, we analyzed cation binding to a potential binding site at position Glu-373 (z ϭ ϩ0.005; Fig. 5E). Taken together, our experimental and computational results suggest that the extent of voltage dependence of extracellular K ϩ binding is small. Direct Evidence for a Charge Compensation Mechanism-The results from voltage jump analysis do not answer questions about the sign of the charge movement (i.e. is positive or negative charge moving within the membrane electric field?). To answer this question, we performed a single turnover K ϩ exchange experiment (Fig. 6). Here, K ϩ was initially only present on the intracellular side, ensuring an outward facing K ϩ binding site. Subsequently, K ϩ was rapidly applied to the extracellular side. As expected, if rearrangement of the binding site is associated with negative charge movement, application of 140 mM K ϩ to the extracellular side was followed by a transient outward current (Fig. 6A, left). To test whether the cell functionally expressed glutamate transporters, 1 mM glutamate was rapidly applied to the extracellular side, showing the well characterized, rapidly decaying inward transient current followed by a steady-state component (Fig. 6B), demonstrating that the solution exchange procedure is fast enough to detect transient currents. 
The decay rate of the transient currents is governed by the time resolution of the solution exchange system (ϳ50 ms). Therefore, no quantitative rate information can be obtained in this experiment. However, the experiment directly demonstrates net negative charge of the K ϩ -loaded transporter, because inward movement of positive charge would be associated with inward transient current. No inward currents were detected in any of the 12 cells observed. The following control experiments were performed. 1) TBOA virtually abolished the current (Fig. 6A, middle). 2) The K ϩ -induced response was absent in control cells (Fig. 6D). 3) The application of Cs ϩ resulted in much smaller amplitude of the transient current, due to the lower relocation rate of the Cs ϩ -bound versus the K ϩ -bound transporter (Fig. 6C). 4) The charge moved in response to K ϩ application is proportional to expression levels (supplemental Fig. 6). 5) Transient outward current precedes steady-state reverse transport current when K ϩ is applied to the extracellular side under reverse transport conditions (supplemental Fig. 7, A-C). When glutamate was applied to the same cell, inward transient current, but no steady-state component, was observed (supplemental Fig. 7D). Together, these control experiments show that the outward charge movement is specifically caused by the glutamate transporter. Upon removal of K ϩ , formation of a transient inward current would be expected, if the charge movement is capacitive. This was seen in some but not all cells. The non-consistent nature of observable inward current is most likely caused by difficulties in removing ions rapidly through solution exchange. As shown previously by Wadiche et al. (36), transient currents are induced by voltage jumps in the presence of Na ϩ but in the absence of transporter turnover. To test the direction of FIGURE 5. Results from EAAC1(E373Q) reveal that charge movements in the presence of K ؉ and Cs ؉ are not caused by cation binding. A and B, current response after stepping the membrane potential in an EAAC1(E373Q)-expressing cell from Ϫ100 to 0 mV in the K ϩ exchange (B) and Na ϩ /glutamate exchange modes (A); 140 mM K ϩ or 140 mM Na ϩ , 10 mM glutamate on both sides of the membrane. C, charge moved in experiments A and B at Ϫ100 mV from an average of n ϭ 5 cells. D, PB computations of the valence of K ϩ binding to EAAC1(WT), as illustrated in F. E, computed valence for binding of K ϩ to several different potential binding sites (EAAC1(WT)). F, structural model used for the PB calculations of extracellular K ϩ binding to the hypothetical binding site near the Glu-373 side chain. The arrow indicates the binding event, and the yellow sphere represents the K ϩ ion before and after binding. Error bars, S.D. Na ϩ -induced charge movement, we rapidly applied 140 mM Na ϩ to the extracellular side of EAAC1. As shown in supplemental Fig. 8, a transient inward current was observed (n ϭ 3 cells). This result is consistent with previous models on electrogenic Na ϩ effects (36), showing differential and opposite effects of K ϩ and Na ϩ interaction with the transporter. To test the predictions of the single turnover K ϩ exchange experiments, we performed numerical simulations according to the kinetic scheme shown in Fig. 6G. As shown in Fig. 6E, the experimentally observed transient and steady-state inward current induced by glutamate application is well represented by these simulations. 
Using a valence of Ϫ0.8 for the K ϩ relocation step, the K ϩ -induced current can also be reproduced well by the simulations (Fig. 6F). DISCUSSION The most important conclusion from this work is that transport of glutamate and the co-transported Na ϩ ions is based on a charge compensation mechanism, in which intrinsic negative charge of the transporter binding site partially compensates for the three positive charges of the bound cations/substrate in the fully loaded transporter in the translocation step, and overcompensates for the single positive charge of the bound K ϩ ion in the relocation step of the empty transporter. Computationally estimated valences of the transporter in various states are in excellent agreement with this conclusion, independent of the protonation state of the conserved amino acid side chain aspartate 454, which is at present ambiguous. In contrast, extracellular K ϩ binding is most likely electrically silent because extracellular water penetrates the cation permeation channel in the apo-state of the transporter. At present, our results do not allow us to draw conclusions about the electrogenic nature of intracellular cation binding, although it has been previously suggested that intracellular Na ϩ binding and/or conformational changes associated with it cause transmembrane charge movement (22). Structure function relationship studies on transport systems have focused on ionizable amino acid residues in transmembrane domains with potential negative charge (37), including several reports on glutamate transporters (14,16,35). In reports from the Wright laboratory, it was proposed that the sodiumglucose transporter, SGLT1, contains negative charge in its sodium binding site(s), counterbalancing the two positive charges of the co-transported Na ϩ ions (38). Similar mechanisms were proposed for the Na ϩ /phosphate transporter (39). For SGLT1, evidence was based on the fact that the Na ϩ /glucose translocation step(s) are associated with little charge movement and that voltage jumps applied to the empty transporter result in transient currents, which are sensitive to the SGLT1 inhibitor phlorizin. Although such voltage jump experiments, similar to the ones performed here, prove that the empty transporter is charged, they do not provide evidence on the sign of the charge. To demonstrate this point, we have performed simulations of concentration jumps and voltage jumps (supplemental Fig. 9). Whereas the [K ϩ ] jump (single turnover) experiment allows a clear differentiation between inward and outward charge movement in the presence of positive or negative charge of the transport domain (supplemental Fig. 9, A and C), the voltage jump experiment shows only minor differences in the kinetics of the transient current signals, but, as expected, the sign of the current is the same in both cases (supplemental Fig. 9, B and D). Thus, when analysis is based on voltage jump data only, conclusions about the sign of the charge of the binding sites can only be obtained by indirect kinetic modeling or site-directed mutagenesis of charged residues. The glutamate transporter, therefore, represents a valuable model system because, in contrast to many other secondary active transporters, relocation of the empty transporter is not spontaneous but rather triggered by K ϩ binding (17). This functional property allowed us to perform the single turnover K ϩ exchange experiments shown in Fig. 5, providing direct proof of negative charge of the K ϩ -loaded transport domain. 
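The kinetic simulations referred to above rest on state models with voltage-dependent rate constants. The sketch below is not the paper's scheme (that scheme and its fitted rates are given in its Fig. 6G, which is not reproduced here); it only illustrates, under assumed rate constants and a symmetric Eyring-type voltage dependence, how a single electrogenic step with a negative valence produces a relaxing transient after a voltage jump:

```python
import numpy as np

F, R, T = 96485.0, 8.314, 298.0   # C/mol, J/(mol K), K
Z_Q = -0.8                        # illustrative valence of the electrogenic step

def rates(V, k0=100.0):
    """Forward/backward rate constants of one electrogenic step; k0 and the
    symmetric splitting of the voltage dependence are assumptions."""
    x = Z_Q * F * V / (2 * R * T)
    return k0 * np.exp(x), k0 * np.exp(-x)        # outward->inward, inward->outward

def transient_current(V_step, n=1e6, t=np.linspace(0.0, 0.05, 500)):
    """Relaxation current after a jump from 0 mV to V_step for a two-state
    (outward/inward) scheme; amplitude is in arbitrary charge units."""
    kf0, kb0 = rates(0.0)
    p0 = kf0 / (kf0 + kb0)                        # 'inward' occupancy before the jump
    kf, kb = rates(V_step)
    p_inf, tau = kf / (kf + kb), 1.0 / (kf + kb)
    dp_dt = (p_inf - p0) / tau * np.exp(-t / tau) # rate of redistribution after the jump
    return Z_Q * n * dp_dt                        # current ~ valence * d(occupancy)/dt

i_t = transient_current(-0.090)                   # jump to -90 mV
print(f"peak transient ~ {i_t[0]:.3g} (arb. units), tau = {1e3/sum(rates(-0.090)):.2f} ms")
```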
A noteworthy result from voltage jump analysis is that the relaxation of the transient current is biphasic (supplemental Fig. 2), suggesting that the underlying molecular processes consist of at least two steps. The rate of decay of the slow step is consistent with rate constants previously estimated for the relocation step (10). Furthermore, the apparent valence associated with this charge movement is consistent with the valence of the inward facing to outward facing transition computed using the PB formalism. Therefore, we propose that the slow component of the charge movement is caused by the major conformational reorientation of the transport domain within the membrane (see Fig. 6 for a proposed kinetic mechanism). The relaxation of the slow phase is slower than the relaxation of transient currents in the glutamate/Na ϩ homoexchange mode (Fig. 4F). This result provides additional evidence that the K ϩ -induced relocation reaction is rate-limiting for the overall transport cycle in the forward direction (10). The Cs ϩ substitution experiments are also consistent with this proposal. Although we cannot directly assign the fast relaxation phase of the transient current to a distinct process in the transport cycle, it can be speculated that it is caused by relaxation of the opening/closing equilibrium of either the internal or the external gate of the transport domain (Fig. 6). Although the gate-opening process is less defined for the internal gate, both structural and molecular dynamics simulation evidence indicates that the external gate is open in the apo-form of the transporter (6,7). Therefore, it is likely that this gate has to close first in the K ϩ -bound state, before the transport domain can move within the membrane. In contrast to the electrogenicity of these structural changes, our molecular dynamics simulations and experiments with the E373Q mutant transporter suggest that the K ϩ binding process is electrically almost silent. Our results are summarized and illustrated in a kinetic mechanism for K ϩ -induced relocation shown in Fig. 7. In this mechanism, electroneutral binding of K ϩ to the extra-or intracellular binding sites is followed by potentially electrogenic closure of the reentrant loops, with subsequent electrogenic movement of the negatively charged transport domain within the membrane. It should be noted that the electrical properties of the intracellular binding/gate closure reactions are speculative at this point. Charge compensation mechanisms, such as the one postulated here, may be a general feature of ion-coupled transporters. This would suggest that the charge of transported cations must be at least partially compensated for to allow efficient ion translocation, which otherwise would have to overcome large electrostatic barriers of inserting a significant amount of charge into the low dielectric environment of the membrane. For example, the Born energy for inserting one Na ϩ ion from water into the low dielectric membrane (⑀ ϭ 2) is 350 kJ/mol. Viewed as an activation energy, such high values are prohibitive for transport, considering that the translocation steps of the glutamate transporter have activation energies no higher than 110 kJ/mol (20). Therefore, it can be hypothesized that Na ϩ -coupled transporters not only require compensation of the charge of the Na ϩ ion but also need to fine tune charge balance of the charge-translocating and -relocating steps (e.g. by countertransporting K ϩ in the case of the EAATs) to avoid paying this electrostatic cost. 
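The Born-energy figure quoted above can be checked with a one-line estimate. The sketch below assumes an ionic radius of about 0.95 Å for Na+; the radius is an assumption on our part, not a value given in the text:

```python
import math

e    = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
N_A  = 6.02214076e23       # Avogadro's number, 1/mol

def born_transfer_energy(z=1, radius_m=0.95e-10, eps_from=80.0, eps_to=2.0):
    """Born estimate (kJ/mol) of moving an ion of charge z*e and the given radius
    from a medium of dielectric constant eps_from into one of eps_to."""
    dG = (z * e) ** 2 * N_A / (8 * math.pi * eps0 * radius_m) * (1/eps_to - 1/eps_from)
    return dG / 1000.0     # J/mol -> kJ/mol

print(f"{born_transfer_energy():.0f} kJ/mol")   # ~350 kJ/mol for Na+ into eps = 2
```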
In addition to charge compensation, the glutamate transporters employ two other strategies to minimize electrostatic barriers. 1) The electric field of the membrane is defocused (Fig. 2E). This defocusing reduces the voltage dependence of individual steps. 2) The charge movement is distributed over many individual kinetic steps in the transport cycle, as indicated in the mechanisms shown in Fig. 6. Therefore, each individual step has less voltage dependence and a reduced potential to be inhibited by unfavorable transmembrane potentials. Together, these three mechanisms may lead to a relatively shallow voltage dependence of the transport rate, which is found for the glutamate transporter, as well as for many other secondary active transport proteins, for which detailed electrophysiological data are available (a large list of studied systems includes Refs. 10 and 38 -40). Reducing the voltage dependence of substrate transport is particularly important for the glutamate transporter because of the large number of positively charged cotransported ions (three Na ϩ and one H ϩ (41)). If these charges were transported across the membrane in a single step, transport would be strongly inhibited upon depolarization. In conclusion, our results provide direct evidence for a charge compensation mechanism of the glutamate transporters, with negative charge of the transport domain overcompensating for the single positive charge of the countertransported potassium ion but only partially compensating for the three positive charges of translocated 3Na ϩ /H ϩ /glutamate Ϫ . Together with defocusing of the membrane electric field and distribution of charge movement over many weakly electrogenic steps in the transport cycle, glutamate transporters and possibly other Na ϩ -coupled secondary active transporters employ this mechanism to prevent paying a large electrostatic energetic penalty for movement of a substantial number of charges through the low dielectric environment of the membrane.
8,122.8
2012-06-15T00:00:00.000
[ "Biology", "Chemistry" ]
Recognition for lateral faces using Neural Networks Face recognition is a difficult and complicated task, and recognition of lateral (profile) faces is harder still than recognition of frontal faces. Pattern recognition is used throughout this system to recognise lateral face patterns (LFP), and a neural network is used to learn those patterns, so that lateral face recognition can be carried out with this technique. Despite extensive research, face recognition remains challenging for the various existing techniques because of the parameters they depend on. In this paper, an amalgamative lateral face recognition (ALFR) approach that merges machine learning and neural network features is evaluated on a synthetic dataset of 200 lateral faces. The performance results show an improvement over existing techniques. Introduction Pattern recognition (PR) is a modern machine learning problem with applications in a broad range of fields, including lateral face recognition (LFR), character recognition (CR), and speech recognition (SR). The field of pattern recognition is still very much in its infancy, although some of the barriers that hampered automated LFR have recently been lifted by advances in computer hardware, which give machines the capacity for faster and more complex computation. Face recognition is a demanding task even for the human brain. It is commonly used in applications such as human-machine interfaces and automatic access control systems. FR involves comparing an image against a database of stored faces in order to identify the person in the input image. The related task of face detection is directly relevant to face recognition, because images must be analysed and faces detected before they can be recognised. Detecting faces in an image can also focus the computational resources of the face recognition system, improving its speed and performance. Face detection involves separating image windows into two classes: one containing faces (targets) and one containing the background (clutter). It is difficult because, although commonalities exist between faces, faces can vary considerably in age, skin colour, and expression. LFR is an interesting and effective application of pattern recognition and image analysis. Facial images are essential for intelligent vision-based human-computer interaction. Face processing rests on the fact that information about a user's identity can be extracted from images and that computers can then act accordingly. Face recognition has numerous applications, ranging from entertainment to information security and biometrics [1]. Various strategies have been proposed to detect faces in a single image. To build fully automated systems, robust and efficient face detection algorithms are required. The face is detected once a person's face comes into view [2]. Once a face is detected, the face region is cropped from the image and used as a "probe" against the stored data to check for potential matches. The face image is preprocessed for factors such as image size and illumination, and specific features are detected. The information from the image is then matched against the stored knowledge, and the matching algorithm produces a similarity measure between the probe face and the stored data. This paper uses an amalgamative face recognition (AFR) strategy in which local features are given as the input to the neural network. 
To start with, the face locale is separated from the picture by applying different pre-preparing exercises. The technique for finding the face district is known as face confinement. The neighborhood highlights, for example, eyes and mouth are removed from the face district. The separation between the eyeballs and the separation between the mouth endpoints are determined to utilize the distance computation algorithm. At that point the separation esteems between the left eye and the left mouth endpoint, the correct eye and the correct Research Article Research Article Research Article mouth endpoint, the left eye and the correct mouth endpoint, the correct eye, and the left mouth endpoint are determined. These qualities are given as contributions to the neural system. Methodologies In this chapter, various methodologies are discussed. Feature Based Lateral Face Recognition (FBLFR) The FBLFR method performs on human face. Input image can be in different face orientation where transformation of feature space is learned and applied on face for feature extraction. Objective is human's various face features like left eye, right eye, nose, mouth are to be extracted. Viola -Jones Skin detection method is best for feature extraction. In multiview face recognition as shown in Figure 3 and 4, face image pass as an input then local feature of face to be extracted. In result it develops mirror image of any best side of human face as 2D mug shown in Figure 3.1. ROI Face Detection & Alignment It involves the only region of the face from datasets and target sample. These pictures can have human body segment like the neck, bind, fabric, top or whatever other things that aren't required for acknowledgment. Utilizing viola-jones face identification system creators remove just face area. Viola-jones face identification system has three indispensable advances: I) include extraction, ii) boosting iii) multi-scale discovery. Distinguished face district may have a minor cross face as of typical human behavior. It conceals any hint of failure district as an element vector for further examination. Face Features Vector Generation Target images may have other background objects too, so using viola-jones face detection technique proposed method extract available faces from target image scene. At the end of this phase all faces are extracted and store it properly for future verification operation. In normal environment human face may not be straight always, so in recognition method position of face feature landmark is changed, and as a result it gives less matching value so face recognition might be unsuccessful. For solution of this issue, we can rotate whole face by calculating distance from X and Y axis of both eyes. y = left eye from Y axis − right eye from Y axis x = left eye from X axisright eye from X axis Rotation angle = arctan (y/x) Using above equation we have rotate whole face to prepare it straight then send for further steps. End of this step all faces are extracted and store it in form of feature vector for feature extraction. Integrated Deep Model for Face Detection and Landmark Localization from ''In the Wild'' Images In recent years the face detection and landmark localization are two main factors in facial analysis applications. Many of the issues are solved to detect the face recognition which increases the precision of face detection [20]. 
This reference proposed the novel method the Integrated Deep Model (IDM) and adopted the two traditional deep learning techniques such as Faster R-CNN and a stacked hourglass which improves the face detection precision and accurate landmark localization. The optimization function is integrated with the proposed system which increases the accuracy and reduces the false positive rate which is 63%. The IDM technique uses the Annotated Faces In-The-Wild, Annotated Facial Landmarks in The Wild and Face Detection Dataset and Benchmark face detection test sets and shows a high level of recall and precision when compared with various existing methods. The dataset used in this system is 300-W test sets which are focused on localization accuracy with original bounding boxes. The increase with our proposed system is 0.005% maximum with facial landmarks which border the face. Deep Learning Face Attributes in the Wild Face attributes prediction is the most complicated issue to find the various complex face variations. In this system, they have shown the two categories such as LNet and ANet are the finely trimmed combined with tags and these are previously trained. For the face localization, LNet is the pre-trained traditional massive item and the other ANet is used to predict the attributes [21]. The proposed system provides not only accuracy but also the facts based on face learning. This will increase the face localization (LNet) and attribute prediction (ANet) with different pre-training techniques. The fine-tuned filters are used to get the image-level attribute tags and reply to the maps over all the images. This will also explain the high-level hidden neurons of ANet automatically which finds the semantic items after training with massive face findings. Experiments This is the synthetic dataset which consists of various lateral faces that can be used for training and then the input is given to system for face detection of lateral faces. These experiments are done by using the java programming language. There are 3 parameters are shown in the bases of performance such as sensitivity, specificity and accuracy. False Positive Rate (FPR) The percentage of cases where an image was classified to normal images, but in fact it did not. False Negative Rate (FNR) The percentage of cases where an image was classified to abnormal images, but in fact it did. Sensitivity The proportion of actual positives which are correctly identified is the measure of the sensitivity. It relates to the ability of the test to identify positive results. The proportion of negatives which are correctly identified is the measure of the specificity. It relates to the ability of the test to identify negative results. The following steps are utilized by the ALFR system 1.) Initialize the lateral face0 images from the datasets. Conclusion In this paper, the proposed system focuses on improving the accuracy, sensitivity and specificity for the lateral faces. It is very needed for every face recognition according to the lateral faces. Recognition of lateral faces is mostly difficult to get the accurate result. But the proposed system works according to the input lateral face image.
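The evaluation metrics defined in the Experiments section above follow the usual confusion-matrix definitions. A minimal sketch is given below; the function name, variable names, and example counts are ours, not from the paper:

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute the metrics used to report lateral-face recognition performance."""
    sensitivity = tp / (tp + fn)          # true positive rate (recall)
    specificity = tn / (tn + fp)          # true negative rate
    fpr = fp / (fp + tn)                  # false positive rate = 1 - specificity
    fnr = fn / (fn + tp)                  # false negative rate = 1 - sensitivity
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "FPR": fpr, "FNR": fnr, "accuracy": accuracy}

# Example with hypothetical counts for a 200-image lateral-face test set:
print(classification_metrics(tp=92, fp=8, tn=88, fn=12))
```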
2,212
2021-04-10T00:00:00.000
[ "Computer Science" ]
Mechanical Behavior of Ultrafine Gradient Grain Structures Produced via Ambient and Cryogenic Surface Mechanical Attrition Treatment in Iron Ambient and cryogenic surface mechanical attrition treatments (SMAT) are applied to bcc iron plate. Both processes result in significant surface grain refinement down to the ultrafine-grained regime; the cryogenic treatment results in a 45% greater grain size reduction. However, the refined region is shallower in the cryogenic SMAT process. The tensile ductility of the grain size gradient remains low (<10%), in line with the expected behavior of the refined surface grains. Good tensile ductility in a grain size gradient requires the continuation of the gradient into an undeformed region. Introduction Numerous reports now exist indicating an order of magnitude increase in strength is possible in metals and alloys that exhibit grain sizes approaching the lower limit of nanocrystallinity.While achieving high strength has never been a problem, the ability to achieve any amount of uniform elongation (the prerequisite for appreciable ductility) has been a challenge.However, several methods have recently been developed to mitigate this strength-ductility tradeoff through the engineering of multi-length scale structures including bimodal grain size distributions [1,2], nanoscale twins [3,4], and grain size gradients [5].Specifically, gradient microstructures generated through surface mechanical attrition treatments or SMAT have additional benefits over other hierarchical microstructures from a surface science/tribological standpoint by concentrating the nanocrystalline properties in the surface region.For instance, nanostructured surface layers have shown improved corrosion resistance [6][7][8][9], wear [10,11] and fatigue [12][13][14], and irradiation resistance [15]. Current SMAT techniques have shown to be very efficient methods for producing grain size gradients, inducing substantial surface grain refinement and varying depths and grades of grain refinement.It has been shown that differences in processing methods can greatly affect both the overall structures (e.g., depth of refined region and "slope" of the grain size gradient) and the individual microstructures (e.g., surface grain size [16], deformation artifacts within grain size regions [16,17]).It was noted by Tao et al. [17] in their work introducing SMAT that finer grains would be expected with plastic deformation at lower temperatures.Indeed, Darling et al. provided the first evidence in a brief report for this effect through a cryogenic SMAT process on copper [16].The percent reduction in grain size (60%) due to cryogenic processing is in good agreement with the empirical correlation between the resulting grain size and the Zener-Holloman parameter (combined metric of strain rate and deformation temperature) [18].In a magnesium alloy, a different surface treatment method resulted in a 63% decrease in grain size for cryogenic burnishing versus ambient [19]. 
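The Zener-Hollomon parameter mentioned above combines strain rate and deformation temperature, and the empirical grain-size correlations cited in the literature scale with it. A short sketch of why cryogenic processing raises Z, and hence drives finer grains, is given below; the activation energy used is a placeholder of the right order for self-diffusion in a pure metal, not a fitted value from this study:

```python
import math

R = 8.314  # J/(mol K)

def zener_hollomon(strain_rate, T_kelvin, Q_joule_per_mol):
    """Z = strain_rate * exp(Q / (R T)); larger Z correlates with finer grain sizes."""
    return strain_rate * math.exp(Q_joule_per_mol / (R * T_kelvin))

# Illustrative comparison for a SMAT-like strain rate of ~1e2 1/s.
Q = 250e3  # J/mol, assumed
Z_ambient   = zener_hollomon(1e2, 293.0, Q)
Z_cryogenic = zener_hollomon(1e2, 77.0, Q)
print(f"log10 Z: ambient = {math.log10(Z_ambient):.0f}, cryogenic = {math.log10(Z_cryogenic):.0f}")
```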
As compared to copper, bcc iron would be expected to have a lesser reduction in grain size based on this empirical parameter in addition to the differences in plastic deformation behavior; nanocrystalline/ultrafine-grained iron exhibits essentially no strain hardening [20,21] and an inverse relationship with strain rate sensitivity as compared to fcc materials [21]-especially important as the strain rates involved in the SMAT process are relatively high (~10 2 ).The cryogenic SMAT process will also take the iron well below its ductile to brittle transition temperature.In this work, we look at the effects of cryogenic and ambient SMAT processing on the microstructure and mechanical properties of iron. Experimental Section The SMAT process was applied to 0.6 cm thick discs 6.35 cm in diameter cut from a rod of ARMCO iron (Goodfellow, Huntington, UK; purity > 99.85%Fe).Details of both the cryogenic and ambient SMAT processes can be found in [16].Briefly, the material to be treated is fitted onto one end of the vial in a mechanical alloying mill (SPEX, Company, Metuchen, NJ, USA); the milling media within the vial, in this case 50 g of stainless steel shot, continually impacts the surface at high rate and variable direction during the SMAT process.For the cryogenic SMAT process, the milling vial is enclosed by a Teflon sleeve through which liquid nitrogen is continuously flowing throughout the treatment.The iron plates were polished to a mirror finish before treatment.The SMAT process was performed for one hour for both the ambient and cryogenic treatments. Following the SMAT processes, the plates were sectioned and polished by a series of steps down to 1 μm alumina.The microstructural analysis was performed using an FEI Nova 600i dual beam (FEI, Hillsboro, OR, USA) Focused Ion Beam (FIB) system.Focused ion beam channeling contrast images (FIBCCI) are obtained using backscattered electrons produced by the ion beam as it rasters across the sample surface.The FIBCCI contrast mechanism is due to changes in the grain orientations that cause variations in ion channeling efficiency, i.e., crystals which are able to channel more effectively due to their orientation produce fewer detectable electrons, so orientations closer to incident ions show up darker, i.e., crystal orientation specific contrast. Hardness measurements were obtained with a Wilson Hardness Tukon 1202 (Buehler, Lake Bluff, IL, USA) using a load of 50 g load with 10 s dwell time with three measurements at each depth.Tensile test dogbones were cut from the SMAT plates with a MicroProtoSystems DSLS 3000 micromill (MicroProtoSystems, Chandler, AZ, USA) with the approximate gauge dimensions: 5 mm length, 1 mm width, and thickness of ~350 µm.The tensile tests were performed on a custom miniature tensile test apparatus which utilizes digital image correlation to track the sample extension.Three tensile tests for each sample were performed at a load rate of 2 μm/s with a 125 lb load cell. 
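As a small illustration of how the miniature tensile data reduce to stress and strain, the sketch below uses the stated gauge geometry (5 mm x 1 mm x ~350 um); the load and extension arrays are placeholders, and the digital image correlation step is assumed to have already produced the extension values:

```python
import numpy as np

# Gauge geometry from the text
width_m, thickness_m, gauge_len_m = 1.0e-3, 350e-6, 5.0e-3
area_m2 = width_m * thickness_m

# Placeholder load (N) and DIC-tracked extension (m) histories
load_N      = np.array([0.0, 30.0, 60.0, 90.0, 120.0])
extension_m = np.array([0.0, 2e-6, 5e-6, 12e-6, 30e-6])

eng_stress_MPa = load_N / area_m2 / 1e6
eng_strain     = extension_m / gauge_len_m
for s, e in zip(eng_stress_MPa, eng_strain):
    print(f"strain {e:.4%}  stress {s:6.1f} MPa")
```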
Microstructure
The FIBCCI micrograph in Figure 1A reveals the initial grain size of the iron plate to be 50-100 μm. After SMAT at ambient temperature (Figure 1C), the plate exhibited submicron grains up to ~200 μm deep into the sample, with plastic deformation artifacts continuing to a depth of about 700 μm. The average surface grain size (measured within the top 5 μm of the plate) was 650 nm. In contrast, the average surface grain size for the cryogenic SMAT treatment of the same duration was 350 nm (Figure 1B). As in the case of cryogenic SMAT copper, which showed a 60% reduction in grain size with respect to the ambient treatment [16], the cryogenic SMAT iron followed the same trend of greater grain refinement than the ambient SMAT treatment, but to a lesser extent. The grain size reduction of only ~45% in the iron follows literature trends for microstructural refinement as described by the strain-rate/temperature pairing captured in the Zener-Hollomon parameter [18,22]. Iron has a higher activation energy for deformation than copper (generally taken in pure metals as similar to the activation energy for self-diffusion); therefore, the grain refinement is less sensitive to changes in temperature. In addition to the difference in surface grain size, the grain size gradient in the cryogenic SMAT iron is significantly sharper, exhibiting only a ~50 μm region of submicron grains and a ~300 μm region of plastic deformation. The surface of the cryogenic SMAT iron also shows some surface cracks, as can be seen in the far upper right of Figure 1B.

Mechanical Properties
The microhardness as a function of depth into the plate is shown in Figure 2. The cryogenic SMAT sample had a higher surface hardness of 2.6 GPa compared to 2.4 GPa for the ambient SMAT plate, in line with predictions from the Hall-Petch relationship for iron [23]. The hardness of the cryogenic SMAT plate falls off more rapidly than that of the ambient SMAT plate, dropping from 2.6 GPa to 2 GPa within the first 50 μm and then to ~1.7 GPa within the first 100 μm, mirroring the steeper grain size gradient seen in the cryogenic cross sections compared to the ambient ones. However, beyond the first ~100 μm there is no significant difference in hardness: as the grain size increases out of the ultrafine-grained regime, the variation in hardness with grain size is minimal. Additionally, while the grain size grows rapidly in the cryogenic SMAT plate, the larger grains still contain a significant amount of deformation artifacts such as dislocation walls and tangles [17,24,25], as can be seen in the changing contrast in the channeling images. These microstructural features, internal to the grain boundaries, can also contribute to the observed hardness of the material. The yield strengths of the cryogenic and ambient SMAT iron samples were 345 and 385 MPa, respectively, significantly higher than that of the untreated iron plate (~150 MPa), as seen in Figure 3. While the greater surface grain refinement in the cryogenic SMAT gradient led to a higher surface hardness, commensurate with Hall-Petch behavior (Figure 2), the opposite relationship is observed in the tensile tests, with the ambient SMAT iron exhibiting a higher yield strength. The gradient structure in the cryogenic SMAT plate exhibited significant grain refinement only to a depth of about 50 μm, comprising about 14% of the tensile dogbone thickness. The ambient SMAT gradient penetrated much deeper into the plate, encompassing closer to 60% of the thickness of the tensile specimen. A greater volume fraction of the tensile specimen is
therefore composed of ultrafine grains in the ambient SMAT iron, resulting in a higher overall yield strength. A similar result was observed in tensile specimens containing a gradient twin structure, in which the depth of the gradient into the dogbone sample was found to set the strength according to a rule of mixtures [26]. An additional contribution to the poorer mechanical behavior of the cryogenic SMAT iron may come from the difference in surface condition between the two processes. As can be seen in Figure 1B, the surface of the cryogenic SMAT plate can exhibit small cracks that are attributed to the expected brittle (versus ductile) behavior at the greatly reduced processing temperature.

Both the cryogenic and ambient SMAT iron displayed very little uniform elongation before exhibiting significant strain softening, in contrast to the strain hardening behavior of the initial iron plate (Figure 3). Nanocrystalline bcc iron usually exhibits brittle fracture in tension, while strain softening is observed in ultrafine-grained iron [20,27,28] at grain sizes as large as 4 μm [29]. A stress drop (i.e., the amount of softening) of ~400 MPa from yield to failure has been observed in homogeneous ultrafine-grained iron samples with grain sizes similar to that of the surface grains in this work [20,28-30]. The total elongation is also in line with homogeneous ultrafine-grained iron produced through ECAP [20,28,30], while the overall strength is lower because the grain size of the gradient increases out of the ultrafine-grained regime. In contrast to this observed strain softening behavior, it was reported in [25] that a grain size gradient structure in steel exhibited extraordinary strain hardening; additionally, a grain size gradient in copper displayed significant strain hardening and tensile elongation as well [5]. To examine these differences, we first look at the two literature reports of significant strain hardening in grain size gradients. In contrast to iron, nanocrystalline and ultrafine-grained copper can display some strain hardening behavior [31]. Additionally, in the case of the grain size gradient in [5], the significant plastic deformation is found to be dominated by mechanically driven grain growth throughout the grain size gradient during loading.

The strain hardening behavior in the steel grain size gradient was not found to be a result of mechanical grain growth [25]; the different regions of the gradient structure were tested separately and together, revealing a synergistic effect between the gradient region and the undeformed core material. When the gradient structure was isolated (e.g., the top 120 μm of the sample was tested separately), it did not exhibit strain hardening but rather strain softening, with a yield drop of almost 100 MPa and an elongation of <10%. Significant strain hardening was observed only when the tensile sample thickness included both the gradient layer and a considerable fraction of the undeformed steel; in that case the gradient layer represented about 12% of the tensile sample thickness [25]. Additionally, the hardening behavior was measured as a function of depth through hardness measurements performed after the tensile test. Hardening was exhibited only towards the back end of the gradient structure (where the gradient transitions to the undeformed core), where the grain size was much greater than 1 μm.
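A minimal numerical sketch of the rule-of-mixtures argument above follows; the assumed strength of the refined surface layer is an illustrative value chosen only to show the effect of volume fraction, not a measurement from this work.

```python
# Two-layer rule-of-mixtures estimate for the yield strength of a gradient
# tensile specimen: sigma ~ f * sigma_refined + (1 - f) * sigma_core.
# The refined-layer strength below is an illustrative assumption, not a fit.

def composite_yield_mpa(f_refined, sigma_refined, sigma_core):
    return f_refined * sigma_refined + (1.0 - f_refined) * sigma_core

sigma_core = 150.0     # MPa, untreated coarse-grained iron plate (Figure 3)
sigma_refined = 550.0  # MPa, assumed strength of the ultrafine-grained surface layer

for label, f in [("cryogenic SMAT, ~14% refined", 0.14),
                 ("ambient SMAT, ~60% refined", 0.60)]:
    print(f"{label}: ~{composite_yield_mpa(f, sigma_refined, sigma_core):.0f} MPa")

# The larger refined volume fraction of the ambient specimen gives the higher
# composite strength, consistent with the measured 385 MPa vs 345 MPa ordering.
# The simple two-layer estimate understates the cryogenic value because the
# region below its thin refined layer is still heavily deformed and hardened.
```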
In this work on grain size gradient iron, the thickness of the tensile test samples was ~350 μm, which encompasses only the refined grain size gradient and heavily deformed regions, and none of the pristine, non-deformed coarse-grained core. The low tensile ductility is therefore a result of this truncation of the gradient before reaching the undeformed core; the tensile behavior of both the cryogenic and ambient SMAT processed iron is in line with that of the stand-alone steel gradient layer in [25], which exhibited low elongation and a lack of strain hardening. Stand-alone tests of the gradient surface layer in copper were also consistent with this mechanical behavior [5,32].

While still exhibiting good ductility, the grain size gradient in Cu-Zn [33] does not improve upon the strength-ductility tradeoff of homogeneous grain size materials as significantly as the copper [5] and steel [25] gradients. The grain refinement in the Cu-Zn study is not quantified, but most of the grain sizes in the gradient appear to be much larger than 1 μm; additionally, the hardness measurements indicate that the entire thickness of the tensile samples (600 μm) has been plastically deformed, preventing the unusual elongation and hardening behavior accessed by the studies that included the undeformed core [5,25].

The standard strength-ductility tradeoff associated with grain refinement is shown in Figure 4: a typical boundary region is indicated by the dashed curve drawn through data points for homogeneous grain structures of the same materials as the gradient structures, namely pure iron [34] (gray circles), pure copper [31,35-40] (gray diamonds), steel [25] (gray triangles), and Cu-Zn alloys [33] (gray squares). These points are data from bulk samples of various homogeneous grain sizes and processing methods, included for comparison with the gradient grain structures of the same material. The strength and elongation of the existing grain size gradient structures, namely this work in iron (magenta circles), Cu-Zn alloys (red squares), steel (black triangle), and copper (orange diamond), are plotted with respect to the bulk literature data. Only the gradient structures that include a significant fraction of non-deformed grains in the tensile specimen (copper [5] and steel [25]) lie significantly off the tradeoff curve for their pure homogeneous counterparts. This further supports the work of [25], which describes the unusual synergistic effect of the deformed gradient layer and the coarse-grained core. The typical strain softening behavior of bcc iron and the lack of an undeformed core section in the tensile specimens in this work explain the relatively poor position on the frontier of the strength-ductility tradeoff for the cryogenic and ambient SMAT iron, as compared to the other three gradient systems.
The surface strength of each grain size gradient structure is also plotted in Figure 4, at the same elongation as the gradient material and marked with an open symbol of the same color. The surface yield strength shown is calculated from surface hardness measurements (yield ~ H/3) for the iron in this work, the Cu-Zn example, and the steel example; the surface yield strength for the pure copper example was obtained from a tensile test of a free-standing foil cut from the surface (a short numerical sketch of this hardness-to-yield conversion is given after the figure captions below). In the work on steel [25], the strength measurements from tensile tests of a foil cut from the surface layer and from the hardness tests were congruent. While the overall strength and ductility of a gradient structure may not be a significant improvement over a bulk sample of similar grain size, the surface strength is consistently a marked improvement over a homogeneous grain size structure at the same elongation. This difference highlights an engineering advantage of grain size gradient materials: the surface of a grain size gradient structure can be as much as eight times harder than a homogeneous grain size part of similar ductility.

Conclusions
The application of surface mechanical attrition treatment at both ambient and cryogenic temperatures to bcc iron plate resulted in significant surface grain refinement and a grain size gradient. The cryogenic SMAT produced a 45% greater grain size reduction than the ambient SMAT, but a shallower depth of grain refinement. Consequently, the surface hardness was higher for the cryogenic SMAT, but the tensile strength and ductility were lower, due to the lower volume fraction of ultrafine grains. Strain softening is observed, in line with iron of homogeneous grain size in the ultrafine-grained regime. The tensile elongation of both grain size gradients remains low (<10%), in contrast to the extraordinary strain hardening observed in the grain size gradient work in steel [25], due to the lack of an undeformed core region in the tensile samples. Moving forward, the relationship between the volume fraction of gradient grains/deformed region and ductility should be explored in order to successfully exploit the benefits of nanocrystalline surface layers while maintaining ductility in the larger part.

Figure 1. (a) Initial microstructure of the ARMCO iron plate; (b) surface microstructure following the cryogenic surface mechanical attrition treatment (SMAT), showing considerable grain refinement and plastic deformation; (c) surface microstructure following the ambient SMAT treatment. The grain refinement continues a considerable distance into the material.

Figure 2. Microhardness of iron plates treated by the SMAT process as a function of depth into the sample. The values for the cryogenic SMAT iron are indicated by blue squares; ambient SMAT by purple circles. Dashed lines are a guide to the eye.

Figure 3. Tensile behavior for cryogenic SMAT (blue curve), ambient SMAT (purple curve) iron, and the untreated iron plate (gray curve). The cryogenic SMAT exhibits lower strength and ductility than the ambient SMAT. Both SMAT-treated plates show an improvement in strength and a decrease in ductility compared to the untreated iron.

Figure 4.
Tensile data existing for gradient grain structures are depicted for the iron SMAT in this work (magenta circles), Cu-Zn alloys [33] (red squares), copper [5] (orange diamond), and steel [25] (black triangle). Literature values for bulk structures in the same materials are shown in gray (iron: gray circles [34]; steel: gray triangles [25] and references therein; Cu-Zn alloys: gray squares [33] and references therein; copper: gray diamonds [31,35-40]). The strength-ductility tradeoff is illustrated by the dotted line. The surface strength for each gradient structure is depicted with an open symbol of the same color at the same elongation point, connected by a gradient arrow.
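As a rough check on the surface strength values referenced above (yield ~ H/3), the sketch below converts the reported surface hardness numbers; the Tabor factor of 3 is a standard approximation for metals rather than a value taken from this work.

```python
# Tabor-style estimate of surface yield strength from hardness, sigma_y ~ H/3.

def yield_from_hardness_mpa(hardness_gpa, tabor_factor=3.0):
    return 1000.0 * hardness_gpa / tabor_factor

surface_hardness_gpa = {"cryogenic SMAT": 2.6, "ambient SMAT": 2.4}  # from Figure 2
for label, h in surface_hardness_gpa.items():
    print(f"{label}: surface yield ~ {yield_from_hardness_mpa(h):.0f} MPa")

# Roughly 870 MPa and 800 MPa at the surface, compared with 345 MPa and 385 MPa
# for the full gradient tensile specimens and ~150 MPa for the untreated plate,
# which is why the surface (open) symbols in Figure 4 sit far above the filled
# gradient symbols at the same elongation.
```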
4,286.8
2015-06-03T00:00:00.000
[ "Materials Science" ]
Secure by Design: Cybersecurity Extensions to Project Management Maturity Models for Critical Infrastructure Projects

Cybersecurity attacks on information technology (IT) systems are becoming increasingly frequent and sophisticated (Bailey et al., 2014). Critical infrastructures – the assets essential for the functioning of a society and economy (Public Safety Canada, 2009), such as power generation and distribution, transportation systems, healthcare services, and financial systems – are increasingly reliant on networked IT systems (Rahman et al., 2011; Xiao-Juan & Li-Zhen, 2010). Securing these interconnected IT systems from cyber-attack is thus of growing concern to many stakeholders (Merkow & Raghavan, 2012). Security experts argue that security should be "designed in" to critical systems upfront, rather than retrofitted later (Hughes & Cybenko, 2013; McGraw, 2006; Pfleeger et al., 2015).

Publisher
The Technology Innovation Management Review is a monthly publication of the Talent First Network. In July, we welcome professors Patrick Cohendet and Laurent Simon from HEC Montréal as guest editors for a special issue on the theme of Creativity in Innovation. For our August and September issues, we are accepting general submissions of articles on technology entrepreneurship, innovation management, and other topics relevant to launching and growing technology companies and solving practical problems in emerging domains. Please contact us (timreview.ca/contact) with potential article topics and submissions. We hope you enjoy this issue of the TIM Review and will share your comments online.

From the Guest Editors
It is our pleasure to be guest editors for the June 2015 issue of the TIM Review on Critical Infrastructures and Cybersecurity. This is the seventh issue of the TIM Review on the theme of cybersecurity, but it is the first to focus specifically on critical infrastructures: the assets essential for the functioning of a modern society. Along with the publication last month of Cybersecurity: Best of TIM Review, the fourth and newest title in the "Best of TIM Review" book series, this issue contributes to the growing body of work on cybersecurity advanced by the TIM Review. This issue comprises four research articles and a report on a recent TIM lecture. All five articles share a connection with Carleton University in Ottawa, Canada, and Carleton's Technology Innovation Management (TIM; timprogram.ca) program. The first three articles arose from a TIM "Advanced Topics" graduate course on critical infrastructures and cybersecurity that included twelve expert guest speakers from six different critical infrastructure sectors speaking about "What challenges keep you up at night?" The fourth article presents research results obtained from a Master of Applied Science thesis at Carleton. The fifth article reports on a Carleton cybersecurity event. The guest editors, Steven Muegge, an Assistant Professor at the Sprott School of Business at Carleton University, and Dan Craigen, a Science Advisor at the Communications Security Establishment and a Visiting Scholar at Carleton's Technology Innovation Management program, contribute a design science perspective on constructing critical infrastructures.
The article introduces a five-step "learning machine" design process anchored around evidence-based design principles, proposes an initial set of seven critical infrastructure design principles that are grounded in theory and evidence, and illustrates the application of the process by developing the design principles from lessons learned from theory and practice.

A Design Science Approach to Constructing Critical Infrastructure and Communicating Cybersecurity Risks (Steven Muegge and Dan Craigen)

Academics are increasingly examining the approaches individuals and organizations use to construct critical infrastructure and communicate cybersecurity risks. Recent studies conclude that owners and operators of critical infrastructures, as well as governments, do not disclose reliable information related to cybersecurity risks and that cybersecurity specialists manipulate cognitive limitations to overdramatize and oversimplify cybersecurity risks to critical infrastructures. This article applies a design science perspective to the challenge of securing critical infrastructure by developing a process anchored around evidence-based design principles. The proposed process is expected to enable learning across critical infrastructures, improve the way risks to critical infrastructure are communicated, and improve the quality of the responses to citizens' demands for their governments to collect, validate, and disseminate reliable information on cybersecurity risks to critical infrastructures. These results will be of interest to the general public, vulnerable populations, owners and operators of critical infrastructures, and various levels of governments worldwide.

"I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers. I'll believe anything, no matter how wild and ridiculous, if there is evidence for it. The wilder and more ridiculous something is, however, the firmer and more solid the evidence will have to be."

Introduction
Three problems hinder the construction of critical infrastructure and the communication of cybersecurity risks. First, reliable information on the risks of cyber-attacks to critical infrastructures is not readily available. Governments and critical infrastructure owners and operators have placed a veil on reliable information related to cyber-attacks on critical infrastructure (Quigley et al., 2013). Second, cybersecurity specialists who brand themselves as "cyber gurus" manipulate cognitive limitations for the purpose of over-dramatizing and oversimplifying cybersecurity risks to critical infrastructure (Quigley et al., 2015). Third, information sharing across critical infrastructures is constrained by a number of issues, including institutional culture (Baker, 2010; Hood, 1998; Relyea, 2004), and secrecy, competition, and public image (Quigley & Mills, 2014).

Critical infrastructures are those assets or systems that are essential for the maintenance of vital societal functions (Council of the European Commission, 2008). Examples of critical infrastructures include energy and utilities, finance, food, government, information and communication technology, health, water, safety, and manufacturing (Public Safety Canada, 2014). Each critical infrastructure has areas of relative strength. For example, nuclear power generation excels at planning and regulation, with strong centralized governance that audits and enforces compliance with standards. Telecommunications excels at real-time monitoring and resilience against continuous, voluminous, and ever-changing attacks. Municipal government infrastructures excel at reactive and flexible response, rapidly replying in a measured way as threats are detected. However, despite the evident opportunity for learning (for each critical infrastructure to learn from the relative strengths of others to improve its own relative weaknesses), there is little evidence that this learning actually occurs in practice.
Perhaps more importantly, knowledge production across critical infrastructures has thus far been limited. We have growing "knowledge silos" about securing particular infrastructures, but only a small body of knowledge that generalizes across infrastructures. To better protect critical infrastructures against evolving cybersecurity threats, we need more learning between infrastructures and more knowledge production across infrastructures.

Critical infrastructures are "design artifacts" that are created by people. Thus, securing critical infrastructures against cyber-attacks is, at least in part, a design problem. There is a well-developed scholarly literature and a body of practical knowledge about design. By reformulating critical infrastructure protection as a design problem, we offer an alternative perspective that complements the technical, policy, law enforcement, and national defence perspectives that are prevalent in current discourse. We propose that the design science notion of design principles could provide a partial remedy to today's problems by enabling learning between different infrastructures and enabling new knowledge production across infrastructures. Our solution takes the form of a design process anchored around evidence-based design principles for secure critical infrastructures. The proposed process is a "learning machine" in which design principles provide a focal point for collaboration between infrastructures, codify specialized knowledge in a teachable form that can be more easily communicated to others, elevate attention from point solutions to higher-impact problems, enable knowledge sharing between different infrastructures, and increase both the rate of learning and the frequency of opportunities for learning.

The article proceeds as follows. The first section develops a design science perspective on secure critical infrastructures. The second section presents a five-step evidence-based design process anchored around design principles. The next two sections illustrate the systematic application of this "learning machine" process by reviewing the lessons learned from theory and practice, and developing a set of seven evidence-based design principles, respectively. The second-to-last section discusses the contribution, and the final section concludes the article.

A Design Science Perspective
Design can be defined as the process of inventing objects that perform specific functions (Baldwin & Clark, 2000). In this definition, inventing is something different from merely selecting between available alternatives: "A problem only calls for design (in the widest sense of that word) when selection cannot be used to solve it" (Alexander, 1964). The notion of "objects" should be interpreted broadly: engineering objects can be designed, but so can organizations, markets, economies, and larger social systems. The serious scholarly study of design originated in the 1960s with early writing and talks by R. Buckminster Fuller (1963), Christopher Alexander (1964), Sydney Gregory (1966), Herbert Simon (1969), and others, and continues to this day.
Simon (1996) defines a science of design as "a body of intellectually tough, analytic, partly formalizable, partly empirical, teachable doctrine about the design process", thus explicitly excluding ideas that are "intellectually soft, intuitive, informal, and cookbooky". Scholars in this domain argue that design science has its own distinct body of knowledge for designing solutions to human problems:
• According to van Aken (2004), design science is distinct from both the formal sciences, such as philosophy and mathematics, that build systems of logical propositions, and the explanatory sciences, such as physics and sociology, that aim to describe, explain, and predict observable phenomena within a field.
• According to Simon (1996), design science is distinct from both the natural sciences and the social sciences that try to understand reality.
• Van Aken (2004) further argues that design science is distinct from applied science, which more narrowly implies the application of research outcomes from the explanatory sciences.

At least three recurring themes from design science scholarship are salient here:
1. When properly expressed, design knowledge is teachable. It can be (partly) captured in an expressive form, and conveyed from one designer to another, or passed down from an experienced senior designer to an apprentice.
2. A subset of design knowledge is connected only with particular problem spaces; other design knowledge is more broadly applicable to categories or families of problem spaces. Consistent with the design science literature, we label the first (more narrow) subset of codified design knowledge as design rules, and the second (more broadly applicable) subset of codified design knowledge as design principles.
3. It is possible to move between these levels of abstraction: to sometimes "abstract up" from narrow design rules to broader design principles, or to "ground" design principles in the specific context and objective of the problem at hand to formulate solution-oriented and context-specific design rules that lead to specific actions. The mechanics of this process are only partly understood; this continues to be an active area of ongoing research for design science scholars (Denyer et al., 2008; Kauremma, 2009).

These three themes imply that design knowledge, when properly expressed as design principles and design rules, can improve over time through cycles of explanation and experimentation that resemble the theory-building and theory-testing cycles of the scientific method. Romme and Endenburg (2006) previously proposed a five-step cyclical design process that makes explicit all of these themes and ideas, including the notion of design principles. Although the authors had originally focused on the specific problem of organization design (Dunbar & Starbuck, 2006; Jelinek et al., 2008), other researchers have found the process to be both adaptable and extensible. For example, McPhee (2012a) introduced refinements for performance management and for linking design principles to specific actions, and proposed a results-based organization design process for technology entrepreneurs. McPhee (2012b) then employed the process to design the organization that today produces and disseminates the Technology Innovation Management Review.
Others have adapted the design science process to a diverse range of artifacts; some of the more novel examples include: i) design of policy to foster technology entrepreneurship in a region (Gilsing et al., 2010), ii) heavy construction projects (Voordijk, 2011), iii) corporate ventures (Burg et al., 2012), iv) public participation processes (Bryson et al., 2013), and v) a knowledge management portal (Pascal et al., 2013). Continuing on this path, we adapt the Romme and Endenburg (2006) process and the lessons learned from design science scholarship to the problem of designing secure critical infrastructures.

Process to Construct Critical Infrastructure and Communicate Cybersecurity Risks
A design science process for designing secure critical infrastructures has the following five steps:

1. Gather lessons learned from theory and practice. This step captures "the cumulative body of key concepts, theories, and experientially verified relationships" (Romme & Endenburg, 2006) that are useful for explaining secure critical infrastructures. The source material thus includes the body of knowledge about critical infrastructures and the body of knowledge about cybersecurity. It includes published research on related phenomena, from the natural sciences and the engineering of physical systems and software, from the social sciences on human behaviour and the economics of organizations, and from what Craigen (2014) calls the nascent and slowly emerging science of cybersecurity. It also includes practitioner knowledge obtained from people working in field settings. Practitioner knowledge can also be evidence-based (Van de Ven, 2007), but it is more tentative and of uncertain validity, perhaps obtained from a small non-representative sample or even a rare or unique event that is unlikely to repeat, and it is necessarily filtered through human experience. Yet, it is essential to the problem at hand, where cybersecurity research is at a very early stage and the current body of knowledge is largely atheoretical (Craigen et al., 2013; Craigen, 2014). Both forms of source material are distilled together into key insights, the "lessons learned" from theory and practice, that are propositional and probabilistic in nature.

2. Formulate design principles. This step develops a coherent set of imperative propositions grounded in the lessons learned from theory and practice. Design principles are prescriptive in logical form (van Aken, 2004): "if you want to achieve Y in situation Z, then perform action X". Some prescriptions are algorithmic and precise, like a recipe, in a quantitative format that is thoroughly specified. Others are heuristic, in the form of a design exemplar, and are partly indeterminate: "if you want to achieve Y in situation Z, then something like action X will help". Design principles are sufficiently general that they could be used by others faced with similar design challenges; examples include platforms and business ecosystems (Muegge, 2013) and sustainable open source software projects (Schweik, 2013). For our purposes, the objective to be achieved is secure critical infrastructures that are protected from cybersecurity threats; thus, the design principles of interest here should capture the situation-contingent design actions to achieve this result.

3. Formulate design rules. This step produces detailed guidelines that are specific to the design context and are grounded in one or more design principles. "These rules serve as the instrumental bases for design work" (Romme & Endenburg, 2006).
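To make the prescriptive form of steps 2 and 3 concrete, here is a minimal Python sketch, our illustration rather than part of the original process, that encodes a design principle in the "achieve Y in situation Z by doing X" form and grounds it into a context-specific design rule; the example principle and rule texts are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DesignPrinciple:
    """Prescriptive form: to achieve `outcome` in `situation`, perform `action`."""
    outcome: str
    situation: str
    action: str
    evidence: List[str] = field(default_factory=list)  # lessons learned that ground it

    def ground(self, context: str, rule_text: str) -> "DesignRule":
        """Ground the general principle into a context-specific design rule (step 3)."""
        return DesignRule(principle=self, context=context, text=rule_text)

@dataclass
class DesignRule:
    principle: DesignPrinciple
    context: str
    text: str

# Hypothetical illustration only:
monitor_chain = DesignPrinciple(
    outcome="a secure critical infrastructure",
    situation="components are sourced through a complex global supply chain",
    action="monitor the entire supply chain",
    evidence=["practitioner concern about trusting a supply chain that is global in scope"],
)
rule = monitor_chain.ground(
    context="industrial control system procurement",
    rule_text="require a software bill of materials and a provenance review for every supplier component",
)
print(rule.text)
```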
Unlike design principles, design rules may be densely interconnected, and they are most effective when applied as sets in combination with other design rules. Thus, design rules are tightly bound to the specific circumstances of a particular problem space. For our purposes, the salient circumstances are likely to include the characteristics of the infrastructure, the performance expectations of the provider and other stakeholders, and the ever-changing threat landscape.

4. Design. This step applies the design rules to create a design representation. Components of a design representation could include physical drawings, mathematical models, software representations, specifications using frameworks, narratives, and other formats (Simon, 1996). The outcome is a "blueprint" that can be followed to construct an artifact that implements the design.

5. Implementation and experimentation. This step constructs a design artifact that implements the design. The artifact can be tested and modified. Romme and Endenburg (2006) write: "The science-based design cycle is completed, by observing, analyzing, and interpreting the processes and outcomes generated by the design, and where necessary, adapting existing organization theories or building new theory. In addition, experiences and observations regarding implementation and experimentation may lead participants to rethink the design as well as the rules and principles used."

Behavioural research suggests that expert designers naturally follow a progression from conceptual principles to design action (Newell & Simon, 1972; Simon, 1996), but they often do so internally and automatically, without making explicit the lessons learned (step 1) or attending closely to design principles (step 2). Expert designers instead hold these ideas in tacit "mental models" (Peffers et al., 2008) that may be difficult to codify and explain to others (Senge, 1990). The contribution here is making explicit the different activities at each step and the different outputs of each step. Attending deliberately to lessons learned, design principles, and design rules can improve performance (Romme & Endenburg, 2008): "If those engaging in a design project develop some awareness of construction principles used, their learning capability as well as the effectiveness of their actions in the project tends to increase". More importantly for the objective of this article, design knowledge is captured in an explicit form that can be explained, shared, challenged, and tested more easily than the tacit design knowledge that is locked up in designer mental models. The next two sections illustrate the application of the first two steps of this process to propose an initial set of design principles that cross all critical infrastructures.

Step 1: Lessons Learned from Theory and Practice
Step one of the design process requires that we gather insights from theory and practice that will guide our design principles in step two. The lessons learned about critical infrastructures originated from three types of source material: i) the published literature, ii) discourse with experienced practitioners, and iii) insights from a set of graduate student research projects. All three sources were associated with a graduate course offered in the Technology Innovation Management (TIM; timprogram.ca) program at Carleton University in the Winter term of 2015 (January to April) on the topic of critical infrastructures and cybersecurity. The authors of this article designed and delivered the course.
Lessons from examining the published literature
The first set of insights emerged from a review of the salient literature, including peer-reviewed journal articles, conference papers, government reports and policy documents, publications from providers of critical infrastructures, and articles in national and international newspapers and magazines. We began with a "recommended reading list" of 35 documents about critical infrastructures selected by the authors and provided to students at the beginning of the course. We added approximately 30 additional sources recommended by graduate students that were discovered during the students' coursework and research projects, and approximately 10 additional sources recommended by guest speakers. Our source material also included the 33 articles about cybersecurity previously published in the Technology Innovation Management Review in the July 2013, August 2013, October 2014, November 2014, January 2015, and April 2015 issues on cybersecurity, including the 15 articles reprinted in Cybersecurity: Best of TIM Review (Craigen & Gedeon, 2015). We identified seven key insights from the literature and provide examples of sources supporting each insight:
1. Critical infrastructures are of high value to society (Gorman, 2009; Langner, 2011)
2. Critical infrastructures are highly complex and increasingly interconnected (Clemente, 2013; Penderson et al., 2006; Rinaldi et al., 2001)
3. Critical infrastructures differ in important ways from other categories of information systems; for example, critical infrastructure systems may operate for decades with minimal updates (Hurst et al., 2014)
4. Critical infrastructures are constantly under attack, sometimes successfully (Jackson, 2011; Miller & Rowe, 2012)
5. Sophisticated attacks are multifaceted, with multiple stages and components (Langner, 2011; Verizon, 2015)
6. Responses to attacks are not always effective; some analysts blame a shortage of knowledge, skills, and qualified security professionals (CSIS, 2010)
7. Knowledge of cybersecurity is atheoretical (Craigen, 2014; Craigen & Gedeon, 2015; Singh, 2014)

Lessons from discourse with practitioners
The second set of insights emerged from presentations and interactive dialogues with twelve expert guest speakers from six different critical infrastructure sectors: finance, government, mining, nuclear power, policing, and telecommunications. The experts held job titles such as Chief Information Officer (CIO), Chief Strategist, Superintendent, Vice-President, Director, Manager, and Senior Technical Architect. Each expert provided a presentation, followed by questions and interactive discussion with teaching faculty, graduate students, and invited guests, with a total duration ranging from approximately ninety minutes to three hours. The general charter given to experts was to respond to the question "What challenges keep you up at night?" From these dialogues, we identified nine new key insights:
1. In the sectors we examined, cybersecurity is not a competitive differentiator. For example, banks in the Canadian banking industry all offer comparable security; they do not currently compete for customers on the basis of which bank is more secure than its rivals. In the technical language of stakeholder value propositions, cybersecurity is most often a point of parity, not a point of difference.
2. There are significant cultural differences between critical infrastructure sectors.
For example, the financial sector takes a risk management approach to security, whereas the nuclear industry response is grounded in physical security. In some sectors, cybersecurity is aligned with operational requirements; in other sectors, it is not.
3. Critical infrastructures are impacted by massive ongoing changes to cyberspace, including: i) trends towards virtualization, commoditization, and open source, ii) the Balkanization of cyberspace, iii) new potential attack vectors (e.g., the growth of mobile devices), and iv) shifts in supply chains.
4. Standards compliance is a major challenge from multiple perspectives, including technical, financial, and organizational competency.
5. Experts voiced concerns with a diverse assortment of challenges, including: i) the weakest link being the human (often due to psychological manipulation), ii) trusting a supply chain that has become global in scope, and iii) the inability of cybersecurity defences to keep pace with the wherewithal, agility, entrepreneurship, and bricolage of the adversary.

Lessons from graduate student assignments
The third set of insights emerged from graduate student course assignments. A total of 41 students formed 16 assignment groups that each delivered three course assignments (one presentation, one document that proposed a solution to a management problem, and one document that developed a contribution to theory). Students were expected to examine the documents on the recommended reading list, engage with the expert guest speakers, and perform their own independent reviews of the published literature. The course assignments required significant analysis of published work, as well as synthesis of new results (Alvesson & Sandberg, 2011; Le Pine & Wilcox King, 2010) and evaluation and judgment to develop actionable recommendations and effectively communicate those recommendations to others. Two of the articles in this issue of the Technology Innovation Management Review were developed from these assignments (Payette et al., 2015; Tanev et al., 2015), and we expect more publications in the future. The graduate students varied widely in demographics, including a mix of mid-career and early-career work experience, of working professionals and full-time students, and of careers in the security domain and in other areas. From these assignments, we identified five new insights:
1. Accountability for cybersecurity is often unclear. For example, cybersecurity is currently under-addressed in IT service-level agreements (SLAs). When something goes wrong, each group can blame the others.
2. The effective assessment and communication of cybersecurity risks should take a "wide lens" perspective on the network, supply chain, and surrounding ecosystem (e.g., Muegge, 2013; Tanev et al., 2015). A product-centric focus is inadequate.
3. Maturity models are a promising and under-utilized approach to assessing capabilities and the adoption of best practices. These models can take the form of dedicated cybersecurity capability maturity models, or cybersecurity can be explicitly included in existing capability assessments (e.g., Payette et al., 2015).
4. Theories and frameworks from other domains, such as entrepreneurship, innovation, criminology, economics, and psychology, can provide alternative perspectives on critical infrastructure design and cybersecurity risk. For example, theories of technology adoption could provide perspective on experts' concerns regarding the limited adoption of known best practices.
5.
Formal models of IT security are improving (e.g., Craigen et al., 2013; Cybenko, 2014), but more work is needed for critical infrastructures. For example, accurate forecasts of the mean time to compromise of long-lived distributed industrial control systems would require new extensions to current models, including new theory and new empirical work.

Step 2: Design Principles for Secure Critical Infrastructures
Step two of the design process requires that we formulate a coherent set of prescriptive and propositional design principles that are anchored in the lessons learned from theory and practice. Each of our seven design principles shares the same desired outcome: a secure critical infrastructure. The seven design principles are as follows.

Knowledge of cybersecurity is atheoretical (Craigen, 2014; Craigen & Gedeon, 2015; Singh, 2014), and consequently, our responses to cyber-attacks are, at best, sub-optimal. A design science approach anchored around explicit design principles provides a way of learning from practice. From practice, we make observations and induce propositions, which can lead to predictive and testable theories. From theories, we can deduce principles and rules and thereby better inform providers of critical infrastructure and cybersecurity stakeholders on how to effectively and efficiently design for and respond to cyber-attacks and how to communicate cybersecurity risks.

Monitor the entire supply chain
The business enterprises that provide products and services to critical infrastructure providers do not and cannot exist in isolation. Each of these organizations has its own suppliers, customers, and partners, and each of those organizations has its own network of relationships. Supply chains are increasingly global in scope and highly complex. They increasingly include open source software and other community-developed assets that are not owned or controlled by a traditional supplier. Failure to properly manage the supply chain can result in malicious or poor-quality products being incorporated into a critical infrastructure, with potentially dire consequences. A broader perspective on supply chain risk and managing the entire "innovation ecosystem" is what Adner (2012) calls "seeing with a wide lens" (q.v., Tanev et al., 2015).

Assign accountability
Today, many cyberspace warranties are weak with regards to accountability. This weakness can be explained partly by technical limitations, for example, the challenges in measuring and verifying cybersecurity compliance, and partly by risk aversion, avoidance, and transference by stakeholders. Whether by regulation or the exercise of customer market power, it is imperative that enterprises in general, and critical infrastructures in particular, take ownership of cybersecurity challenges and become accountable for their postures.

Know your adversaries
Researchers are learning more about cyber-attacks and cyber-attackers (e.g., Kadivar, 2014), including the entities behind prominent attacks, their motivations, their tools and technologies, and the complex innovation ecosystems that produce attacker tools and technologies. Knowledge about adversaries enables designers of critical infrastructures to make better decisions about cybersecurity defences and enables a broader range of responses to threats. Perhaps infrastructure providers can demotivate attackers by removing a political raison d'être or reducing monetization opportunities, or perhaps they can disrupt the attacker's supply chain by attacking the malware market within which the botnet masters and attackers reside.
Collaborate around common interests
Cybersecurity is not a challenge faced by a critical infrastructure provider alone. The consequences of compromised security and service interruptions impact individuals, enterprises, economies, and societies. Academia, government, and business each have a role to play, and they can invest together around common interests. For example, providers of critical infrastructures can benefit from platforms, community innovations, and participation in business ecosystems in many of the same ways in which entrepreneurs and other organizations benefit (Muegge, 2013). Open source software projects are a high-potential setting for collaboration; critical infrastructure providers tap into the benefits of high-quality software, and other developers and users benefit from the critical infrastructure providers' high demands for security and testing. Design principles can anchor these collaborations and enable learning.

Design for resilience
Resilience, broadly speaking, refers to the ability to recover from or adjust easily to misfortune or change (Merriam-Webster, 2015). In the context of information systems, Smith and colleagues (2011) define network resilience as the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation. As the safety community has long understood, single points of failure must be avoided by design. Critical systems must be diverse, resilient, and resistant. Subsystems must be redundant and sandboxed, so that critical infrastructures can tolerate failed or compromised components. Designing for system resilience brings together operational and cybersecurity objectives; protecting critical infrastructures against evolving cybersecurity threats thus becomes an enabler, a necessary condition for achieving operational objectives.

Design within a strong culture of cybersecurity
Culture refers here to "a fairly stable set of taken-for-granted assumptions, shared beliefs, meanings, and values that form a kind of backdrop for action" (Smircich, 1985). According to Schein (1993), the shared assumptions that are embedded in a strong organizational culture are quickly picked up by new members as "the correct way to perceive, think, and feel". A strong culture of cybersecurity thus refers to an organizational culture in which cybersecurity is deemed normal, where security is expected and valued, and where the negative consequences of compromised security are perceived as abnormal, anomalous, and repugnant, or "not the way things are done around here". For example, groups and individuals would practice safe computing and would expect others to do so. IT systems would be promptly patched, and secure best practices would be the norm. Thus, the seventh design principle brings together the first six design principles and institutionalizes them as "the correct way to perceive, think, and feel."

Contribution
Design science is increasingly applied in the domains of information systems (Hevner et al., 2004; Peffers et al., 2008; Pries-Heje & Baskerville, 2008) and organization design (Dunbar & Starbuck, 2006; Jelinek et al., 2008; McPhee, 2012b), and in a wide array of novel applications including policy design (Gilsing et al., 2010) and process design (Bryson et al., 2013). By developing and applying a design science perspective on secure critical infrastructures, we offer three contributions:
1.
We adapt prior work by Romme and Endenburg (2006) to propose a five-step critical infrastructure design process anchored around the creation and application of design principles.
2. We propose a set of seven critical infrastructure design principles that are grounded in theory and evidence.
3. We illustrate the application of the critical infrastructure design process by developing our initial set of seven design principles from the lessons learned from theory and practice. Others can take this process forward to the next steps by formulating context-specific design rules for particular problem spaces, taking into account the target infrastructure and the expected threats.

We argue that a design science approach that is anchored in explicit and well-formulated design principles would offer three important benefits:
1. Design principles enable knowledge sharing between infrastructures. Design knowledge expressed as design principles is teachable, actionable, and testable.
2. Design principles enable knowledge production across infrastructures. Explicit and deliberate attention to design principles elevates the focus of knowledge production and capture from the "sticky" knowledge of domain-specific problems to broader categories of knowledge about critical infrastructures and cybersecurity risks.
3. Design principles can play a central role in the theory-building process. Ideally, design principles would follow from strong theory (Romme & Endenburg, 2006). However, because the current body of knowledge about cybersecurity is largely atheoretical (Craigen et al., 2013; Craigen, 2014), design principles for the foreseeable future are likely to be grounded mainly in practitioner experience rather than strong theory. With a strong set of explicit and well-formulated design principles, researchers could alternate between inductive and deductive cycles of theory-building (Christensen & Raynor, 2003), first generating tentative theoretical explanations that could account for the design principles, then devising empirical tests to distinguish between rival explanations.

Each of the seven initial design principles suggests questions for future research on securing critical infrastructures. First, we need more research on the design process itself, on how to more effectively accomplish each of the steps, and on how to transition between steps, for example, how specifically to formulate context-specific design rules that are anchored in a coherent set of design principles. Second, we need a better understanding of how to secure complex global supply chains, and how to estimate, communicate, and manage supply chain risk. Third, we need to better understand accountability for cybersecurity, especially regarding shared and open source assets, and from providers of goods and services for which cybersecurity has not previously been a primary concern. Fourth, we need more information, and more timely information, about the adversaries of critical infrastructures: their motivations, capabilities, technologies, activities, and business models, and how their operations could be disrupted. Fifth, we need better ways to motivate collective action around shared interests and to collaborate effectively. Sixth, we need systems that are more resilient and can continue operating even as specific subsystems fail or are compromised. Seventh, we need cybersecurity to become culturally embedded in more activities by more stakeholders.
As our initial design principles are refined and new design principles are developed and added, we expect the number of interesting and high-impact research questions and problems to grow.

Conclusion
The ongoing success of cyber-attackers and the growing criticism of how cybersecurity risk is communicated are a condemnation of current practice. We confront these problems by developing a design science perspective on secure critical infrastructures, proposing a five-step design process anchored around evidence-based design principles, and demonstrating our "learning machine" approach by gathering lessons learned about critical infrastructures from theory and practice and formulating a set of seven evidence-based design principles. Our principles are not definitive; rather, they are a starting position to be improved by others. The continued progress of scholarly research, the inclusion of more research results and more practitioner literature, the addition of more experts with field experience in a broader range of infrastructures, and further iteration through the cycles of the design process are all expected to sharpen and refine the starting list of seven principles. We call upon and challenge our readers to apply and extend this work.

A Value Blueprint Approach to Cybersecurity in Networked Medical Devices (George Tanev, Peyo Tzolov, and Rollins Apiafi)

Cybersecurity for networked medical devices has usually been "bolted on" by manufacturers at the end of the design cycle, rather than integrated as a key factor of the product development and value creation process. The recently released cybersecurity guidelines by the United States Food and Drug Administration (FDA) offer an opportunity for manufacturers to find a way of positioning cybersecurity as part of front-end design, value creation, and market differentiation. However, the technological architecture and the functionality of such devices require an ecosystem approach to the value creation process. Thus, the present article adopts an ecosystem approach to including cybersecurity as part of the value proposition of networked medical devices.
It extends the value blueprint approach suggested by Ron Adner to include an additional dimension that offers the opportunity to define the potential locations of cybersecurity issues within the ecosystem, the specific nature of these issues, and the players that should be responsible for addressing them, as well as a way to articulate the added cybersecurity value as a competitive differentiator to potential customers. The value of the additional blueprint dimension is demonstrated through a case study of a representative networked medical device: a connected insulin pump and continuous glucose monitor.

"When the value proposition requires multiple elements to converge, you need an approach that will allow you to assess alternative configurations and generate shared understanding and agreement among the partners as to how these elements should come together. … Left unarticulated, contradicting visions don't conflict until after commitments are made and pieces are brought together. But when the strategy meets reality, details become disasters." (Ron Adner, Professor of Strategy and Entrepreneurship, in The Wide Lens)

Introduction
Concerns over the state of medical device cybersecurity have become a topic of intense public discussion after cases such as the hacking of connected insulin pumps by researchers to deliberately deliver lethal insulin doses (Healey et al., 2015). Following these and similar cases, the United States Department of Homeland Security began investigating two dozen medical devices for potential security vulnerabilities, and the Food and Drug Administration released guidance to manufacturers for establishing cybersecurity management strategies for their medical devices (FDA, 2014). Experts have come forward stating that the medical device industry is significantly behind other industries in terms of its ability to both articulate and address cybersecurity issues (Fu & Blum, 2014). Also, with networked medical devices increasingly joining the Internet of Things, security will take a much more prominent role as risks to patient health, safety, and data privacy continue to grow (Wirth, 2011). Between 2013 and 2014, the increase in information security breaches for healthcare facilities was almost double that of other industries (Harries, 2014), and with networked devices moving from hospital networks to home networks, new threats are bound to emerge. With public and regulatory pressure rising, manufacturers are spending more time, effort, and resources on improving cybersecurity. At the same time, the existing ways of articulating customer value in the medical device industry do not seem to allow for differentiation in terms of cybersecurity benefits. These growing cybersecurity concerns and the lack of cybersecurity benefit articulation highlight the growing need for manufacturers to begin utilizing security as a source of market value and differentiation.

One of the main criticisms of medical device cybersecurity is that security tends to be added on at the end of the development process, instead of being "baked in" from the start as part of the design phase (Shah, 2015). This late consideration highlights a key problem in the way many manufacturers approach security. Security is perceived as a hurdle to jump over, rather than a key part of the value proposition that can be used as a market differentiator. With unit sales of networked medical devices estimated to increase five-fold from 2012 to 2018 (Healey et al., 2015), increased security efforts are becoming a necessity. These additional efforts provide an opportunity for manufacturers to add value and differentiate themselves in such an increasingly competitive market.

Networked medical devices are predominantly software-based medical devices that are connected to networks involving patients, healthcare organizations, medical specialists, and other service providers. In most cases, their operation requires wireless connectivity and multiple interoperations, including the sharing of clinical information and the control of other medical devices and systems, as well as non-medical equipment (e.g., routers and servers) and software. Complex networked systems, including medical devices, have now become common, and with this added sophistication, new behaviours and unexpected consequences have begun to appear that are outside the control of the medical device manufacturer (Rakitin, 2009). A report by the Atlantic Council assessing the benefits and risks of healthcare systems in the Internet of Things identifies four main types of networked medical devices (Healey et al., 2015):
1. Embedded devices (e.g., pacemakers)
2. External devices (e.g., insulin pumps)
3. Stationary devices (e.g., networked infusion pumps)
4.
Consumer products for health monitoring (e.g., FitBit or Nike Fuel band) Consumer products for health monitoring are sometimes not discussed with medical devices because they do not require regulatory approval (i.e., they do not fit the definition of a medical device in most regions), but the regulatory framework around them has been under intensive discussion and is likely to change in the coming years (Healey et al., 2015). We will therefore include them as part of our discussion. The rest of the article is organized as follows. We will next describe the specifics of cybersecurity issues in the medical device sector. Then, we will summarize the key points of the value blueprint approach and suggest an additional dimension that addresses cybersecurity issues. The next section contains an application of the cybersecurity blueprinting approach to a specific case consisting of a connected insulin pump and continuous glucose monitor. Finally, we conclude by articulating the key contributions of the article and offering suggestions for future research. Cybersecurity for Medical Devices Cybersecurity for medical devices has traditionally been seen as a tradeoff to usability, and therefore as a potential challenge for market value. Even the FDA emphasizes that improved security should be counter-balanced against reduced usability (FDA, 2014). This tradeoff is true in certain cases, but an overemphasis would lead to missing the opportunity to articulate security as add-on value. For example, securing an insulin pump with a password for daily tasks is cumbersome and patients will most likely use a simple password or find a way around it. In another example, encrypting wireless communication of a pacemaker would improve security while also adding value to the patients because they would be safe from malicious threats. With the medical device market already being highly competitive, not articulating security improvements as an add-on value to the patient is a missed opportunity. In order to articulate the created cybersecurity value, manufacturers of networked medical devices must first change the way they look at the security landscape. Networked medical devices should be seen as a platform in a diverse ecosystem of stakeholders (Shah, 2015), which is similar to mobile communication platforms in the automotive industry. The ecosystem depends on numerous software and hardware systems, some of which have been developed by suppliers and must be integrated using "glue code" so that they can function together (Amin et al., 2015). The integration increases the www.timreview.ca A Value Blueprint Approach to Cybersecurity in Networked Medical Devices George Tanev, Peyo Tzolov, and Rollins Apiafi chances of introducing cybersecurity vulnerabilities at the interfaces between the different software and electronics systems. The glue code problem can be framed as a knowledge coordination problem between manufacturers and suppliers of networked medical devices. For example, a portable heart monitor communicates to a mobile device, which displays relevant health data and also uploads it to a server for additional post-processing and analytics. Thus, vulnerabilities could be at another location in the ecosystem and not in the device itself, which requires a high degree of knowledge coordination between manufacturers, suppliers, co-innovators, and adoption chain partners. To highlight security as part of the value proposition, we must move from a product-centric approach to an ecosystem-driven approach to security. 
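To make the interface risks just described more tangible, the short sketch below shows what authenticated encryption of a single telemetry reading might look like before it leaves a device for a companion app or server. It is a minimal illustration only, not a description of any manufacturer's implementation: it assumes the third-party Python cryptography package, the device identifier and reading fields are hypothetical, and key provisioning, rotation, and secure storage are left out entirely.

```python
# Minimal sketch, not a vendor implementation: authenticated encryption of one
# telemetry reading between a device and a companion app/server, using AES-GCM
# from the third-party "cryptography" package. Key handling (provisioning,
# rotation, secure storage) is deliberately omitted.
import json
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in practice, provisioned per device
aead = AESGCM(key)

def seal_reading(reading: dict, device_id: str) -> bytes:
    """Encrypt and authenticate one reading; the device id is bound as associated data."""
    nonce = os.urandom(12)  # 96-bit nonce, never reused with the same key
    plaintext = json.dumps(reading).encode("utf-8")
    return nonce + aead.encrypt(nonce, plaintext, device_id.encode("utf-8"))

def open_reading(blob: bytes, device_id: str) -> dict:
    """Verify and decrypt a reading; raises an exception if it was tampered with."""
    nonce, ciphertext = blob[:12], blob[12:]
    return json.loads(aead.decrypt(nonce, ciphertext, device_id.encode("utf-8")))

sealed = seal_reading({"glucose_mg_dl": 112, "ts": "08:00"}, "pump-001")
print(open_reading(sealed, "pump-001"))
```

Controls of this kind at a single link are only one ingredient; deciding which partner implements them, at which interface, and how the resulting protection is communicated to customers still calls for an ecosystem-driven approach shared by all players.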
This approach would allow manufacturers to: 1. Identify key stakeholders in the ecosystem together with all associated cybersecurity vulnerabilities. 2. Create a plan to address the highest-risk cybersecurity vulnerabilities in collaboration with stakeholders. 3. Articulate the value dimensions associated with the security efforts to the relevant stakeholders. 4. Improve security by innovating the ecosystem. This article aims to address these points by adapting a value blueprint approach to cybersecurity. A Value Blueprint Approach to Cybersecurity The value blueprint approach proposed by Ron Adner in his book The Wide Lens takes an ecosystem approach to value creation. Translating a specific value proposition into a value blueprint makes it possible to identify and visualize the multiple dependencies within the ecosystem, as well as to deal with situations where multiple elements need to converge and a shared understanding between stakeholders is required. Adner suggests an approach to value blueprint development that includes the following steps: 1. Identify your end customer. 2. Identify your own project. 3. Identify your suppliers. 4. Identify your intermediaries. 5. Identify your complementors. 6. Identify the risks in your ecosystem (Red=Unmitigable risk; Yellow=Mitigable risk; Green=Acceptable risk): a. Level of co-innovation risk b. Level of adoption risk 7. For every partner whose status is not green, understand the problem and suggest a viable solution. 8. Update the blueprint on a regular basis. The risk levels in Adner's blueprint follow a green, yellow, and red "traffic light" approach. The approach focuses solely on the interplay between co-innovation and adoption chain risks in managing value creation and articulating the market value of the product. For co-innovation risk, green means that the stakeholder is ready and in place, yellow means that they are not yet in place but there is a plan for getting them there, and red means that they are not in place. For adoption risk, green means that partners are eager to participate and see the benefit of their involvement, yellow means that partners are neutral but open to involvement, and red means that they prefer the status quo and are not willing to be involved. A red light would indicate that more substantial changes need to be made in the blueprint, such as a change in partners. The blueprint could, however, also be used to analyze an additional dimension of value - in particular, the value of cybersecurity in networked medical devices. In this way, a blueprint would allow for an explicit analysis of security vulnerabilities from an ecosystem perspective. It would also allow for using all of the value blueprint tools focused on evolving the ecosystem to enhance the security of networked medical devices, as well as for articulating the newly created cybersecurity value for better market differentiation. The cybersecurity blueprint can be generated by the process proposed by Adner, with minor changes in the way risks in the ecosystem are approached. For the sake of simplicity, we will assume that all other aspects of value for all stakeholders have already been articulated, and that the risk we are assessing in our value blueprint is strictly cybersecurity risk. This assumption requires some changes to Adner's steps, mostly after step 5. The steps for developing the cybersecurity blueprint for a networked medical device are as follows: 1. 
Identifying your end customer, your own project, your suppliers, your intermediaries, complementors together with their specific cybersecurity concerns, if any (steps 1-5 in Adner's approach). 2. Identify the locations of security risks in your ecosystem by taking into account any concerns that were explicitly articulated by the different stakeholders (Red=Unmitigable risk; Yellow=Mitigable risk; Green=Acceptable risk). 3. For every location in the blueprint understand the coinnovation (i.e., technical) and adoption aspects of the problems and prioritize them by using an appropriate cybersecurity risk-analysis framework into green (acceptable), yellow (mitigable), and red (unmitigable) risks levels. 4. Develop a risk management action plan to address the highest priority risks (yellow and red) with a viable security risk mitigation measure to make the risk level acceptable (green) and add it to the blueprint as appropriate. 5. Use the cybersecurity blueprint to articulate the value created by your efforts and the next steps in your cybersecurity management plan in a way that you could differentiate in the marketplace. 6. Update and innovate the cybersecurity blueprint on a regular basis. The changes would allow for the localization of cybersecurity risks within the ecosystem, subsequently taking adequate action to mitigate the risk, and using the blueprint to articulate the security efforts and the value added. As in Adner's blueprint, the levels of risk are represented by red (does not allow for delivery of end value), yellow (requires additional efforts to mitigate risk) or green (does not require additional efforts). The adoption of a meaningful risk analysis method is crucial for the implementation of the cybersecurity blueprint approach. Even though it is out of the scope of the present article, we could mention some points regarding the application of risk analysis methods as part of an ecosystem cybersecurity approach for networked medical devices. First, known risk analysis methods such as Failure Mode and Effect Analysis (FMEA), or Health FMEA (HFMEA) (Shaqdan et al., 2014) do not seem to grasp the full scope of the cybersecurity risks that can be addressed in our ecosystem approach. Approaches based on FMEA-type risk analysis typically address risks due to design failures rather than to malicious attacks. Cybersecurity risk analysis in an ecosystem context needs to address issues associated with intentional malicious agents attacking or interfering with networked medical devices. Secondly, the risk analysis for networked medical devices should focus on the cyber-resilience of the ecosystem, or in other words, the ability to withstand cyber-events or cyber-attacks. Cyber-resilience risks in the context of networked medical devices relate to the control of access, the quality/validity of information, and to the continuity of operation (Boyes, 2015). Risks must also be analyzed within the context of the full lifecycle of networked medical devices and with respect to all relevant stakeholders. In other words, what are the risks related to cases of future, unforeseen cyber-vulnerabilities such as the case of the Heartbleed incident (Krebs, 2014). What is important to point out is the need to move beyond two-dimensional definitions of risk (i.e., probability of harm occurring and severity of the harm once it occurs), which might oversimplify the ability of a medical device company to proactively manage cybersecurity and cyber-resilience risks. 
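As an illustration of steps 2 to 4 above, the sketch below represents blueprint locations with their co-innovation and adoption risks on the green/yellow/red scale and surfaces the yellow and red locations for mitigation first. It is a minimal Python sketch with hypothetical element names, concerns, and mitigations; in practice the grading would come from whatever risk-analysis framework the manufacturer adopts.

```python
# Minimal sketch of a cybersecurity value blueprint as data: each ecosystem
# location carries co-innovation and adoption risk on the green/yellow/red
# scale plus an optional mitigation. Names, concerns, and mitigations below are
# hypothetical placeholders.
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class Risk(Enum):
    GREEN = "acceptable"
    YELLOW = "mitigable"
    RED = "unmitigable"

@dataclass
class BlueprintElement:
    name: str
    role: str                      # supplier, intermediary, complementor, end customer
    co_innovation: Risk
    adoption: Risk
    concern: Optional[str] = None
    mitigation: Optional[str] = None

    def needs_action(self) -> bool:
        return self.co_innovation != Risk.GREEN or self.adoption != Risk.GREEN

def action_plan(blueprint: List[BlueprintElement]) -> List[BlueprintElement]:
    """Steps 3-4: list the yellow/red locations, worst risk first."""
    severity = {Risk.RED: 0, Risk.YELLOW: 1, Risk.GREEN: 2}
    flagged = [e for e in blueprint if e.needs_action()]
    return sorted(flagged, key=lambda e: min(severity[e.co_innovation], severity[e.adoption]))

blueprint = [
    BlueprintElement("Glucose monitor vendor", "complementor", Risk.YELLOW, Risk.GREEN,
                     concern="security of the integrated product",
                     mitigation="third-party security testing of the integration"),
    BlueprintElement("Data-transfer web service", "intermediary", Risk.GREEN, Risk.GREEN),
]
for element in action_plan(blueprint):
    print(element.name, "->", element.mitigation)
```

Once a mitigation has been applied and verified, the location can be re-graded and the blueprint updated, per step 6. The grading itself, however, is only as good as the risk-analysis method behind it, which is why the caveats above matter.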
Thirdly, the product benefit or utility should be also added to the risk score as a relevant factor. Its addition could provide a higher degree of sophistication of the cybersecurity risk management logic. For example, a risk that remains unacceptable after performing all practicable cybersecurity mitigation measures may actually be tolerable if the device's clinical benefit or medical significance outweighs its residual risks. The next section offers an example case of the application of the value blueprint approach to the analysis of the cybersecurity issues associated with Animas insulin pumps. Case Study: The Animas Vibe Insulin Pump Cybersecurity Value Blueprint The described cybersecurity value blueprint was hypothetically applied from the perspective of the manufacturer of the already marketed Animas Vibe Insulin Pump (tinyurl.com/pavb3lp). The Animas insulin pump is used with the G4 PLATINUM Continuous Glucose Monitor made by DEXCOM (tinyurl.com/qda8x5x). The added value of security for the insulin pump has yet to be articulated by manufacturers. In most of the marketing materials, there is little mention of the security of the device, even though the vulnerabilities of insulin pump security have been extensively documented by researchers and presented in the media. The cybersecurity value blueprint would clearly articulate the ecosystem efforts made for improving cybersecurity and provide an additional opportunity for market differentiation. www.timreview.ca George Tanev, Peyo Tzolov, and Rollins Apiafi The Animas insulin pump is an example of the direction towards connected and personal medical devices, which are gaining platform-like properties as they are integrated with other devices and services. The insulin pump is not directly connected to a network, but is connected wirelessly to the glucose monitor, and can transfer data to a healthcare professional via the diasend web service (www.diasend.com/us/) by connecting the pump to a computer via USB or infrared connection. Future networked medical devices will send data to cloud web services wirelessly. To begin building the cybersecurity blueprint, we first need to establish all of the key elements of the ecosystem. This process is addressed in the first five steps for generating the blueprint. The elements are listed in Table 1. Following step 2, the cybersecurity blueprint for the Animas insulin pump was generated, as represented in Figure 1. The security concerns that are highlighted in Figure 1 are graded at the level of "yellow risk" and therefore should be mitigated. The concerns are described below with potential mitigations that could be implemented and their added value reflected in the blueprint: 1. Cybersecurity management practices of the insulin pump manufacturer: The manufacturer has to follow a process for assessing and addressing security risks within the device. Mitigation: Implementing a cybersecurity management strategy and an open disclosure policy for device security vulnerabilities that have been found by external parties. Cybersecurity management practices of the continuous glucose monitor manufacturer: The manufacturer of the Animas pump has limited power over the cybersecurity management practices of their partner device manufacturer. They can assess and address any security issues in the integration process of the two devices. Mitigation: None -To be addressed at other locations in the blueprint. 3. 
Security implications in the integration of the two devices: Combining two individual products into a package raises potential security concerns because security for the integrated product was not planned in the initial design process. Mitigation: A third-party firm can be utilized for security tests of the integrated product. This approach can also address vulnerability number 3 from Figure 1. Regulatory requirements and recommendations of cybersecurity: The requirements that are set forth by the regulatory body in the region where the product is marketed are relevant for licensing the device. In many regions, there are still no explicit regulatory requirements for cybersecurity. Mitigation: Many of the mitigation steps that are taken for the other vulnerabilities ensure that the manufacturer is not simply fulfilling the bare minimum regulatory requirements, but taking a proactive approach to cybersecurity. 5. The role and impact of medical professionals on device security: Medical professionals will most likely play an instructional role with patients and have access to sensitive patient data through web services. It is important that medical professionals are security conscious when dealing with networked devices. Mitigation: Training or instructions of good security practices with the device and accessing patient data. 6. The role and impact of patients/users on device security: The way that patients operate the device could also risk its security. It is important that patients 7. Transferring data between patients and medical professionals over the Internet: Data that is transmitted from the insulin pump to a computer to upload data to the patient's physician could be susceptible to unauthorized access of the patient's health information. The data can currently be transferred by USB or by infrared data transfer. Mitigation: The manufacturer has already made a good choice in using diasend web services that specialize in transferring data between patients and physicians. They also should ensure that any infrared information is encrypted when being transferred. It is evident that the cybersecurity of networked medical devices is the responsibility of many different stakeholders. When cybersecurity improvement measures are taken in the vulnerable parts of the ecosystem, articulating the value of these efforts is done visually in the blueprint. This type of visual representation of the security value dimension allows stakeholders and end customers to see a manufacturer's comprehensive efforts and highlights the added value and differentiation from competitors. The cybersecurity mitigations have been added to an amended cybersecurity blueprint in Figure 2. The risks that were formerly yellow (mitigable) have been shifted to green (acceptable) following the mitigations that were applied. Contribution The key contribution of this article is to extend the value blueprint approach to address the additional value dimension of cybersecurity, in order to articulate cybersecurity value as a way for medical device companies to differentiate in the marketplace. George Tanev, Peyo Tzolov, and Rollins Apiafi The introduction of a cybersecurity value blueprint is important for the following four reasons: 1. It helps in identifying the key stakeholders in the ecosystem together with all associated cybersecurity vulnerabilities. 2. It helps in creating a prioritized plan to address the highest-risk cybersecurity vulnerabilities in collaboration with the rest of the stakeholders. 3. 
It articulates the value dimensions associated with the security efforts of all relevant stakeholders. 4. It enables innovating the ecosystem through the definition of a clear action plan for improving the security of medical devices over time in a way that could be articulated to business stakeholders and end customers. This type of approach can change the way security is perceived to become a market differentiator built-in from the onset of design, instead of an add-on at the last stages of the development process. For future contributions, the method for analyzing the cybersecurity risks within the ecosystem can be explored further. In this work, the emphasis was on establishing the principles for the cybersecurity value blueprint instead of the specific risk analysis, which requires a deeper insight into the various technological platforms enabling the operation of the device. It is clear, however, that the risk analysis within the ecosystem needs to focus on risks associated with the safety, privacy, and security of all stakeholders in the ecosystem. A potential future work could be to adapt a risk analysis method that incorporates cyber-resilience, lifecycle, and utility attributes in the context of networked medical devices and the ecosystem that is identified through the cybersecurity blueprint. Conclusion The concern regarding cybersecurity in the increasing number of networked medical devices is growing. Manufacturers have yet to effectively convert their cybersecurity efforts into a market driver and market differentiator. This work argues that not positioning these efforts as a market value and differentiator is a missed opportunity that can be taken advantage of by looking at cybersecurity through an ecosystem perspective rather than a product-centric perspective. The suggested cybersecurity value blueprint approach offers the opportunity to enhance both the "resonating focus" and "points of difference" approach to the articulation of a value proposition by including the cybersecurity value dimension ). An explicit articulation of cybersecurity provides manufacturers with a tool for localizing and mitigating cybersecurity risks in the ecosystem, and presenting their efforts in a visual blueprint where the value and differentiation can be clearly seen. In an industry where security is beginning to take a central role, and where competition is fierce, the cybersecurity value blueprint could be a tool that would better position manufacturers in the market. Finally, it should be pointed out that, although the suggested tool should be considered as part of a more general risk management approach, it requires deep knowledge of the technological platforms and the specific business process implementation of all involved stakeholders. This is just another illustration of the fact that medical cybersecurity is truly a value cocreation problem that opens new opportunities for technology entrepreneurs and innovation management scholars and practitioners, which should be addressed through the coordinated activities of the entire business ecosystem within a systematic value chain resilience perspective (Boyes, 2015). Introduction Cybersecurity attacks on information technology (IT) systems are becoming increasingly frequent and sophisticated (Bailey et al., 2014). 
Critical infrastructures - the assets essential for the functioning of a society and economy (Public Safety Canada, 2009), such as power generation and distribution, transportation systems, healthcare services, and financial systems - are increasingly reliant on networked IT systems (Rahman et al., 2011; Xiao-Juan & Li-Zhen, 2010). Securing these interconnected IT systems from cyber-attack is thus of growing concern to many stakeholders (Merkow & Raghavan, 2012). Security experts argue that security should be "designed in" to critical systems upfront, rather than retrofitted later (McGraw, 2006; Pfleeger et al., 2015). Cybersecurity capability maturity models (e.g., Caralli et al., 2010; NIST, 2014; U.S. Department of Energy, 2014) are one approach used by organizations to assess capability to defend against cyberattacks, benchmark cybersecurity capability against others, and identify cybersecurity capabilities to improve (Miron & Muita, 2014). Many systems that comprise our critical infrastructures - including electricity, transportation, healthcare, and financial systems - are designed and deployed as information technology (IT) projects using project management practices. IT projects provide a one-time opportunity to securely "design in" cybersecurity to the IT components of critical infrastructures. The project management maturity models used by organizations today to assess the quality and rigour of IT project management practices do not explicitly consider cybersecurity. This article makes three contributions to address this gap. First, it develops the argument that cybersecurity can and should be a concern of IT project managers and assessed in the same way as other project management capabilities. Second, it examines three widely used cybersecurity maturity models - i) the National Institute of Science and Technology (NIST) framework for improving critical infrastructure cybersecurity, ii) the United States Department of Energy's Cybersecurity Capability Maturity Model (C2M2), and iii) the CERT Resilience Management Model (CERT RMM) from the Carnegie Mellon Software Engineering Institute - to identify six cybersecurity themes that are salient to IT project management. Third, it proposes a set of cybersecurity extensions to PjM3, a widely-deployed project management maturity model. The extensions take the form of a five-level cybersecurity capability perspective that augments the seven standard perspectives of the PjM3 by explicitly assessing project management capabilities that impact the six themes where IT project management and cybersecurity intersect. This article will be relevant to IT project managers, the top management teams of organizations that design and deploy IT systems for critical infrastructures, and managers at organizations that provide and maintain critical infrastructures. "The challenge in the digital economy is that no chain is stronger than its weakest link." Christian Wernberg-Tougaard, Global Lead for Social Welfare & Human Services at Oracle Corporation Like the maturity models in other specialized domains, cybersecurity capability maturity models help organizations to measure their current processes against established industry standards. 
However, current cybersecurity capability maturity models overwhelmingly focus on evaluating how organizations protect existing systems (i.e., processes to maintain cybersecurity) rather than evaluating how organizations securely develop and deploy new secure information systems (i.e., processes to create cybersecurity). New IT systems are typically developed and deployed as IT projects (Phillips, 2010), which are managed using project management practices (PMI, 2013a). IT projects provide a one-time opportunity to "design in" cybersecurity to the new IT systems deployed within critical infrastructures. Although the project management domain has its own maturity models (e.g., Sowden et al. 2013;PMI, 2013b), the project management models in use today do not explicitly address cybersecurity. For providers of critical infrastructures and their stakeholders, this is both a gap and an opportunity. This article makes three contributions to the theory and practice of securing critical infrastructures. First, it develops the argument that cybersecurity can and should be a concern of the IT project managers and project sponsors of critical infrastructure IT projects, and that project management maturity models could be extended to assess cybersecurity capability in the same way that these models assess other capability domains. Second, it identifies six cybersecurity themes that are salient to IT project management. It accomplishes this by selecting three cybersecurity capability maturity models, examining the content and areas of commonality, and identifying those aspects that overlap with the scope of IT project management or are likely to be impacted by project management decisions and activities. The themes therefore reflect both building secure systems and also building systems in secure way. The three models examined are: i) the National Institute of Science and Technology (NIST) framework for improving critical infrastructure cybersecurity, ii) the United States Department of Energy's Cybersecurity Capability Maturity Model (C2M2), and iii) the CERT Resilience Management Model (CERT RMM). Third, it selects a project management maturity model -the PjM3 -and proposes a new five-level cybersecurity capability perspective that augments the seven capability perspectives of the standard model. Bringing together cybersecurity capability maturity models and the PjM3 project management maturity model provides critical infrastructure organizations with the means to evaluate capability in upstream "cybersecurity creation". This approach will be especially useful for organizations that highly value security and concurrently employ cybersecurity capability maturity models to evaluate capability in downstream "cybersecurity maintenance". The body of this article is structured as four sections. The next three sections each develop one of the article's three contributions and the fourth section concludes. Securing the IT Project IT systems within critical infrastructures typically originate as IT projects (Phillips, 2010). Unlike operations, which are continuous and on-going, projects have a specific set of objectives and well-defined and finite time boundaries (Kerzner, 2013). IT development and deployment activities are typically managed using project management tools and techniques, such as those of the Project Management Body of Knowledge (PMBOK; PMI, 2013a), and an IT project management process with well-defined stages and gates between stages (Phillips, 2010). 
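As a purely illustrative sketch of how such a stage-gate process could carry cybersecurity checks alongside its usual criteria, the example below verifies hypothetical security evidence at each gate before a project is allowed to proceed. The gate names and checklist items are invented for this sketch and are not taken from the PMBOK or from any of the maturity models discussed here.

```python
# Illustrative only: a stage-gate review in which cybersecurity criteria are
# checked alongside the usual gate criteria. Gate names and checklist items are
# invented for this sketch.
from typing import Dict, List

GATE_CHECKLISTS: Dict[str, List[str]] = {
    "design_complete": ["threat model reviewed",
                        "security requirements traced to the design"],
    "build_complete": ["third-party components inventoried and assessed",
                       "static analysis findings reviewed"],
    "ready_to_deploy": ["penetration test findings resolved or accepted",
                        "business continuity plan in place"],
}

def gate_review(gate: str, evidence: Dict[str, bool]) -> bool:
    """A gate passes only if every criterion on its checklist has supporting evidence."""
    missing = [item for item in GATE_CHECKLISTS[gate] if not evidence.get(item, False)]
    for item in missing:
        print(f"{gate}: missing evidence for '{item}'")
    return not missing

print(gate_review("design_complete", {"threat model reviewed": True}))  # -> False
```

The point is simply that gates are a natural place to confirm that security work has actually been done rather than assumed.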
Decisions and activities within an IT project are likely to have a lasting impact on cybersecurity. Procurement and supply chain management are one example. Outsourced design services, purchase of commercial offthe-shelf (COTS) software, and the adoption of open source software components are all potential sources of vulnerabilities that are difficult to detect and correct later (Ellison et al., 2010). Quality management is a second example. Defects in design, deployment, or provisioning during the IT project could be exploitable until detected and corrected -potentially throughout the active lifecycle of the IT system. The security of the project office and the project infrastructure is also of lasting impact. The tools and processes used for project work, document management, and communication within the project team are all components of information security and integrity. For example, project artifacts thought to be private could be a goldmine to attackers for future social engineering attacks. Thus, IT projects provide a one-time opportunity to securely "design in" cybersecurity to the new IT systems deployed within critical infrastructures. Capability maturity models approach an activity as a process and formally compare the characteristics of the process in use against the characteristics of an "ideal" process (Humphrey, 1988). This approach originated in software engineering and has been widely applied in many specialized domains, including cybersecurity , capacity to leverage open source software (Carbone, 2007), and enterprise-readiness of open source software projects (Golden, 2008). Project management maturity models are the subset of capability maturity models that focus specifically on project management capabilities. A body of empirical evidence associates the use of project management standards, processes, and maturity models with positive project outcomes (Brookes, 2009;Milosevic & Patanakul, 2005). The two most developed and widely deployed project management maturity models are: Both of these models and their various derivatives address the management of project risks, but none explicitly address cybersecurity. Nonetheless, cybersecurity capability could be assessed at the same time and in the same way as other areas of concern within the scope of project management. The remainder of this article focuses exclusively on the PjM3 project management capability maturity model. There are three reasons for selecting the PjM3 rather than a different model. First, the PjM3 is the most widely used model internationally (Young et al., 2011). Second, the PjM3 provides a discrete five-level score in seven perspectives (Sowden et al., 2013); discrete and modular models are more easily extensible for our purposes than, for example, the continuous scores of the OPM3. Third, the PjM3 is not explicitly connected with any particular project management framework or process (Sowden et al., 2013); it is thus more widely applicable than specialized models such as PRINCE2. Nonetheless, much of what follows about the PjM3 could be readily adapted to other project management models by repeating the steps described here. The PjM3 is the project management component of the P3M3 -a broader maturity model that also addresses portfolio management and program management. 
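To show how cybersecurity could be assessed at the same time and in the same way as the other perspectives, the sketch below treats an assessment as a simple mapping from perspective to a discrete level from 1 to 5 and adds a cybersecurity perspective alongside a few of the standard ones. The scores shown are placeholders, the level names anticipate the five PjM3 levels described below, and the reporting logic is an illustration rather than part of the PjM3.

```python
# Minimal sketch: a PjM3-style assessment as a mapping from perspective to a
# discrete maturity level (1-5), with "cybersecurity" assessed as one more
# perspective. Perspective names and scores are placeholders.
from typing import Dict, List

LEVEL_NAMES = {1: "awareness", 2: "repeatable", 3: "defined", 4: "managed", 5: "optimized"}

def weakest_perspectives(assessment: Dict[str, int]) -> List[str]:
    """Return the perspectives sitting at the lowest level, as candidates for improvement."""
    floor = min(assessment.values())
    return [name for name, level in assessment.items() if level == floor]

assessment = {
    "risk management": 3,
    "stakeholder engagement": 3,
    "resource management": 2,
    "cybersecurity": 1,            # the proposed additional perspective
}
print(weakest_perspectives(assessment))          # -> ['cybersecurity']
print(LEVEL_NAMES[assessment["cybersecurity"]])  # -> 'awareness'
```

This kind of discrete, per-perspective scoring is also what makes the extension straightforward, which is one of the reasons given above for preferring the PjM3 over continuous-score models.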
The P3M3 was developed in 2006 by the Office of Government Commerce in the United Kingdom (OGC, 2006) and was most recently updated in 2013 by Axelos, a private-public partnership with the United Kingdom government (Sowden et al., 2013). It originated as an enhancement to OGC's Project Management Maturity Model, which had been adapted from the original Capability Maturity Model (CMM) developed by the Software Engineering Institute (SEI) in the United States (Humphrey, 1988). P3M3 has been adopted in both government and private organizations. For example, the Australian Department of Finance and Deregulation mandated P3M3 as the common methodology to evaluate Australian government agencies and assess their organizational capability to commission, manage, and realize benefits from ICT-enabled investments (Young et al., 2011). The PjM3 assesses capability within seven process perspectives (Sowden et al., 2013): i) management control, ii) benefits management, iii) financial management, iv) stakeholder engagement, v) risk management, vi) organizational governance, and vii) resource management. Similar to other process maturity models, each perspective is independently assessed at one of five levels: awareness of process (level 1), repeatable process (level 2), defined process (level 3), managed process (level 4), and optimized process (level 5). Each level and each process perspective has embedded attributes. Generic attributes relate to all process perspectives at a maturity level. Specific attributes relate only to a particular process perspective. Thus the PjM3 is potentially extensible with new perspectives that employ the same structure and five-level measurement scale, and provide specific attributes for each maturity level. Cybersecurity Capabilities There is an extensive body of prior work on cybersecurity and on critical infrastructure that can inform a cybersecurity perspective on IT project management. previously identified nine published cybersecurity capability maturity models for critical infrastructures. These nine models were published by five different organizations, with a variety of stated purposes. We employed the following steps to select in five areas: i) maturity and stability of authoring organizations; ii) experience in maturity modelling of authoring organizations; iii) the accessibility of detailed documentation; iv) publishing in the public domain or under open licenses; v) sufficient prescription of framework. Second, we employed three selection criteria: i) high scores in the five areas, ii) no more than one model from any one publisher, and iii) where two models received similar scores, we favoured the more general model or base model over a specialized or derivative model. This selection process was intended to select on both quality and diversity. The following three cybersecurity capability maturity models were selected for further analysis: C2M2 is structured as ten domains, each comprising a set of cybersecurity practices -the activities that an organization can perform to establish and grow capability in the domain. 2. The NIST Cybersecurity Framework from the National Institute of Science and Technology (NIST, 2014). The NIST Cybersecurity Framework was developed in response to a February 2013 executive order from the United States President to "enhance the security and resilience of the Nation's critical infrastructure and to maintain a cyber environment that encouraged efficiency, innovation, and economic prosperity" (The President, 2013). 
It identifies a set of general principles and best practices to guide organizations to develop their own individual readiness profiles. 3. The CERT Resilience Management Model (CERT-RMM) from the Software Engineering Institute (SEI) at Carnegie Mellon University (Caralli et al. 2010). CERT-RMM was the first security model to adopt a capability maturity perspective. Beginning with the first drafts circulated in 2008, and now at version 1.1 (2010), the CERT-RMM was developed as the foundation for a process improvement approach to operational resilience management. It identifies organizational practices necessary to manage operational resilience and to respond to stress with mature and predictable performance. Table 1 provides a summary of the content and main concerns of each of the three cybersecurity models. There are commonalities among all three models, concerns that are prominent in two of the three models, and unique concerns that are found in one model only. Next, we systematically identified the cybersecurity concerns from Table 1 that are most salient to IT project management. We eliminated concerns that we deemed as purely operational and retained those concerns that either i) overlap with the scope of IT project management or ii) are likely to be impacted by project management decisions and activities. Finally, we grouped the remaining concerns into broad thematic areas, identifying six project-applicable cybersecurity themes: 1. Project environment security 2. Workforce security knowledge 3. Business continuity planning 4. Secure project supply chain 5. Project deliverable security 6. Project deliverable resiliency These six themes provide a potential basis for a cybersecurity perspective on project management capability maturity. Cybersecurity Extensions to the PjM3 To identify the specific attributes of a PjM3 cybersecurity perspective, we re-interpreted the six themes at each of the five levels of generic process-maturity attributes. By employing the same structure and measurement scale, we ensure that the new cybersecurity perspective is fully compatible with the seven standard perspectives of the PjM3, and can be assessed at the same time and in the same way as the standard perspectives. www.timreview.ca Secure by Design: Cybersecurity Extensions to Project Management Maturity Models for Critical Infrastructure Projects Jay Payette, Esther Anegbe, Erika Caceres, and Steven Muegge Level 2: Repeatable 1. Some team members have cybersecurity skills, but they are applied inconsistently throughout the team. 2. Project documentation is created, but there are no processes to maintain or control project documents or code. 3. Each project is responsible for ensuring appropriate identity and access management of project system environments. 4. Cybersecurity requirements are developed in an inconsistent and ad hoc manner. 6. Secure software development practices (e.g., code scans, penetration testing, OWASP) are employed in an inconsistent manner across projects. 7. Business Continuity Plans are inconsistently employed by projects and rarely maintained. Level 3: Defined 1. Cybersecurity skills are included in the job descriptions of key design, development, and testing roles. 2. Security screening of project resources is performed. 3. Project documentation and code is actively maintained in a secure repository. 4. A project role is identified as responsible for the cybersecurity of project deliverable(s). 5. 
There are defined processes for access and identity control of all system environments used by the project team. 6. Enterprise cybersecurity requirements are defined at the organizational level and are mandatory for all IT projects. 7. Checklists containing the details of all project cybersecurity processes (i.e., SoS, PIA, TRA, etc.) are available to all project team members. 8. Project standards for secure software development are defined and available to all team members. 9. Project standards for secure management of documentation and code exist and are available to all project team members. 10. Corporate procurement processes are employed by projects and all transactions are auditable. 11. Business Continuity Plan templates are made available to all project team members. Level 4: Managed 1. Key design, development, and testing resources hold verifiable cybersecurity skills credentials. 2. Access and identity management configurations of project systems environments are consistently audited to ensure environment security and integrity. 3. All requirements documents are reviewed by an enterprise cybersecurity architect. 4. Phase containment exists to ensure that all project cybersecurity processes and standards (i.e., SoS, PIA, TRA, secure software development, Business Continuity Plans, etc.) are appropriately employed by each project and are of appropriate quality. 5. Projects only use qualified vendors who are, among other things, evaluated for security risk. 7. An enterprise security architect is required to sign-off on all major project deliverables. 8. Project documentation and code are maintained in a secure repository with strict version control. 9. All project documentation and code artifacts have only one copy, which is maintained in a secure repository. 10. Qualified vendors are continuously evaluated for security risk. The cybersecurity perspective on project management capability maturity demonstrates the potential relationship between IT project management and cybersecurity of critical infrastructures. Much of the existing work on securing critical infrastructures, including the various cybersecurity maturity models, has emphasized ongoing operations. However, we suggest that an emphasis on operations addresses only half of the cybersecurity challenge, and we argue that the IT projects that design and deploy new IT systems also require attention. Cybersecurity extensions to project management maturity models -such as the PjM3 cybersecurity perspective proposed above -address the introduction of new systems in a way that will be familiar to experienced project managers and project sponsors. Conclusion As cybersecurity becomes an increasing area of concern for critical infrastructure providers, governments, and private enterprise, it warrants greater attention from IT project managers, project management offices, and project sponsors. We have argued that IT projects provide an opportunity to securely "design in" cybersecurity to the information systems components of critical infrastructures; thus, cybersecurity can and should be a main concern of IT project managers. A cybersecurity perspective on project management maturity addresses this opportunity in a form that is familiar to project practitioners. 
Although this work is presented here at an early stage and has not yet been proven in the field, we sincerely hope that it sparks a dialogue between IT project practitioners, cybersecurity professionals, and providers of critical infrastructures on how to more effectively secure the systems that are essential for the functioning of our society and our economy. Successful implementation will require action by multiple groups. We call upon IT project managers and project staff to try out these ideas in the field -beginning with informal self-assessments of cybersecurity maturity and followed by action plans to raise scores -and then to report back on their experiences. We call upon critical infrastructure project sponsors to provide IT project managers and project teams with the authority, incentives, training, and resources to "design in" cybersecurity to IT projects and assess the maturity of those efforts. We call upon researchers to empirically test the efficacy of these ideas, particularly the relationships between IT project cybersecurity attributes and highimpact outcomes, including traditional project outcomes, security outcomes, and operational outcomes. If evidence from the field shows this approach to be effective, adoption on a larger scale will require actions from project management organizations to incorporate cybersecurity more formally into the Pj3M and other project management standards. This formalization would open up new revenue opportunities for providers of training services, for providers of certification and assessment services, and for providers of project tools and infrastructure, and it would accelerate the careers of qualified project professions who are capable of operating at a high maturity score on the cybersecurity perspective. www.timreview.ca About the Authors Jay Payette is a graduate student in the Masters of Design program at Carleton University in Ottawa, Canada, and is the Managing Principal of Payette Consulting. Jay founded Payette Consulting in 2011 to help clients balance the consistent results of repeatable business processes and analytic decision making, with the fuzzy world of creativity. His research has focused on applying design-thinking principles to business model generation, strategy, and project delivery. Prior to founding Payette Consulting, Jay worked for the Canadian consulting practice of Accenture and as an independent IT Project Manager. Esther Anegbe is a graduate student in the Technology Innovation Management (TIM) program at Carleton University in Ottawa, Canada. She also holds a Bachelor's degree in Computer Engineering from Ladoke Akintola University of Technology in Nigeria. She worked as a Technology Analyst with a leading Investment Management Firm in Lagos, Nigeria (Sankore Global Investments), where she formed part of the technology team that developed, deployed, and provided support for the financial software projects that expanded the market reach of the firm's stock brokerage and wealth management subsidiaries. She is currently working on a startup (Tech Wits) to provide enterprise solutions and services to startups in their accelerators and incubators. Erika Caceres is a graduate student in the Technology Innovation Management (TIM) program at Carleton University in Ottawa, Canada. She holds a Bachelor's degree in Technology Information Management from The University of Yucatan, Mexico. 
She previous worked as an innovation consultant at I+D+i Hub, a leading technology transfer office in Merida, Mexico, where she formed part of the management team to produce innovation projects that were submitted for funding to the government to help accelerate the economy in the south of Mexico. She is currently working on Volunteer Safe, an online startup that pre-screens and licenses volunteers and connects them to volunteer opportunities aligned to their profile. Steven Muegge is an Assistant Professor at the Sprott School of Business at Carleton University in Ottawa, Canada, where he teaches and leads a research program within Carleton's Technology Innovation Management (TIM) program. His research, teaching, and community service interests include technology entrepreneurship and commercialization, non-traditional settings for innovation and entrepreneurship (business ecosystems, communities, platforms, and interconnected systems that combine these elements), and business models of technology entrepreneurs (especially in non-traditional settings). www.timreview.ca Olukayode Adegboyega Introduction A botnet is a network of infected hosts that carry out commands sent by a botmaster. The impacts of botnetenabled cyber-attacks on individuals and organizations are diverse and have necessitated a collaborative approach that leverages technical and non-technical systems to mitigate botnet-enabled cyber-attacks. However, such collaborative initiatives carried out to solve botnet-related problems are costly, complex, and time consuming due to poor communication among the executives and personnel in technical, legal, security, and research functions of heterogeneous organizations, including law enforcement agencies. Although many collaborative initiatives have been successful, some have not (Lerner, 2014;Schmidt, 2012). This article provides a representation for executing and resisting botnet-enabled cyber-attacks and botnet takedowns. The intent is to improve communications, learning, and decision making among the various actors that need to come together to effectively and efficiently address botnet-related problems, accelerate theory development, and clarify the discussion about the "best-case" scenarios for the future of the online world. In this representation, the initiatives to execute and resist botnet-enabled cyber-attacks and botnet takedowns are conceptualized as collective actions carried out by Internet-linked clubs. Collective action refers to actions undertaken for a collective purpose, such as the advancement of a particular ideology or idea, or the polit-A model for executing and resisting botnet-enabled cyber-attacks and botnet takedowns does not exist. The lack of this representation results in ineffective and inefficient organizational decision making and learning, hampers theory development, and obfuscates the discourse about the "best-case" scenarios for the future of the online world. In this article, a club theory model for botnet-enabled cyber-attacks and botnet takedowns is developed. Initiatives to execute and resist botnet-enabled cyber-attacks and botnet takedowns are conceptualized as collective actions carried out by individuals and groups organized into four types of Internet-linked clubs: Attacker, Defender, Botbeheader, and Botmaster. Five scenarios of botnet-enabled cyber-attacks and five scenarios of botnet takedowns are examined to identify the specific dimensions of the three constructs and provide examples of the values in each dimension. 
The developed theory provides insights into the clubs, thereby paving the way for more effective botnet mitigation strategies. This research will be of particular interest to executives and functional personnel of heterogeneous organizations who are interested in improving the quality of their communications and accelerating decision making when solving botnet-related problems. Researchers applying club theory to examine collective actions of organizations linked by the Internet will also be interested in this research. Although club theory has been applied to solve problems in many fields, this is the first effort to apply it to botnet-related problems. I don't want to belong to any club that will accept me as a member. Groucho Marx (1890-1977 Comedian, actor, and host " " www.timreview.ca Representing Botnet-Enabled Cyber-Attacks and Botnet Takedowns Using Club Theory Olukayode Adegboyega ical struggle with another group (Postmes & Brunsting, 2002). Collective action requires a definition of who "we" are and an understanding of what "we" can do (Drury et al., 2014). Botnet-enabled cyber-attacks executed by groups such as Wonderland, Anonymous, Drink or Die, The Ukranian ZeuS, Dark Market, Operation Olympic Games, Ghost Net, and PLA Unit 61398 provide examples of collective actions of Internet-linked groups. Membership of such groups is comprised of both willing and unwilling members whose devices were compromised without their consent (Grabosky, 2014). Other examples of collective action include initiatives to takedown botnets. In 2009, organizations including Defence Intelligence, Panda Security, Neustar, Directi, Georgia Tech Information Security Center, and security researchers came together to form the Mariposa Working Group for the purpose of taking down the Mariposa botnet (Sully & Thompson, 2010). In 2013, Symantec and Microsoft collaborated to obtain a court injunction to dismantle the ZeroAccess botnet (Whitehouse, 2014). In 2014, a group of more than 30 organizations comprised of law enforcement agencies, the security industry, academia, researchers, and service providers cooperated to takedown the GameOver Zeus botnet (Whitehouse, 2014). The group identified the criminal elements and technical infrastructure, developed tools, and crafted messages for users. However, little is known about the inner workings of the collective actions of such groups. By inner working, the author means the arrangement employed by the groups to carry out their activities (e.g., to recruit members or to distribute technical and non-technical infrastructures among members). Club theory has proven useful in examining the inner workings of collective action in private and public settings (Crosson et al., 2004;Medin et al., 2010). Extant literature on the applications of club theory has focused on non-Internet applications. Club theory has been applied to solve problems related to: highway congestion, highway pricing, provisioning, and financing (Bergias & Pines, 1981;Glazer et al., 1997); grid services (Shi et al., 2006); and the simultaneous deepening and enlargement of the European Union (Ahrens et al., 2005;Thiedig & Sylvander 2000). A few Internet-related problems such as those related to self-organizing peer-to-peer networks have been solved by the club theory (Asvanund et al., 2004). 
Raymond (2013) suggested that the Internet can be considered as a set of "nested clubs", and Hofmokl (2010) suggested that Internet goods such as broadband Internet access, proprietary software, and closed databases can be categorized as club goods because they are nonrivalrous in consumption and excludable. Club theory has been applied to solve problems in many different fields. However, to the author's knowledge, this is the first application of club theory to solve botnet-related problems. In this article, information on five botnet-enabled cyber-attacks and five botnet takedowns is used to conceptualize four types of Internet-linked clubs. The article identifies the dimensions of three constructs and their values observed in ten scenarios. The remainder of this article is structured as follows. First, the four types of Internet-linked clubs and the three constructs of club theory that anchored the research are described. Then, the method used to carry out the research is explained, and the results are presented. The results include the dimensions of the three constructs for examining the clubs that execute and resist botnet-enabled cyber-attacks and botnet takedowns, as well as the characterization of each of the four clubs. The last section provides the conclusions. Types of Internet-linked Clubs Definitions of a club have been offered in line with the scope of the authors and the justifications for club formation, such as a taste for association and cost reductions derived from team production. A club has been defined as: i) a group of consumers sharing a common facility (Glazer et al., 1997); ii) a group of persons who share in the consumption of a good which is not purely private, nor wholly divisible among persons (Pauly, 1970); iii) a consumption ownership-membership arrangement justified for its members by the economies of sharing production costs of a desirable good (Buchanan, 1965); and iv) a voluntary group of individuals who derive mutual benefit from sharing one or more of the following: production costs, the members' characteristics, or a good characterized by excludable benefits (Cornes & Sandler, 1996). These definitions indicate that a club is a group that shares a good. A club good has been defined as: a good produced and consumed by a group of individuals, whose consumption unit is greater than one but less than infinity (Pauly, 1970); goods that are partially rivalrous and excludable (Sandler & Tschirhart, 1980); resources from which outsiders can be excluded, for which "the optimal sharing group is more than one person or family but smaller than an infinitely large number" (Strahilevitz, 2006); and goods whose benefits and costs of provision are shared between members of a given sharing arrangement or association (Buchanan, 1965). A club good has two major characteristics: i) partial rivalry and ii) excludability. A good is partially rivalrous in consumption when one person's consumption of a unit of the good detracts, to some extent, from the consumption opportunities of another person (Sandler & Tschirhart, 1980). A key feature of the good shared by a club is that it is possible to prevent individuals who have not paid for the good from having access to it. Examples of club goods include hospitals, health clubs, trauma clinics, libraries, universities, movie theatres, telephone systems, and public transport (Sandler & Tschirhart, 1997). 
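The definitional point can be compressed into the standard goods typology: a club good is the excludable case that is only partially rivalrous. The small sketch below is an illustration of that classification, not something proposed in the article.

```python
# Illustration of the definitional point only: a club good is excludable and
# partially (not fully) rivalrous.
def classify_good(excludable: bool, rivalry: float) -> str:
    """rivalry is a degree in [0, 1]: 0 = non-rival, 1 = fully rival."""
    if excludable:
        return "private good" if rivalry >= 1.0 else "club good (excludable, partially rivalrous)"
    return "common-pool resource" if rivalry > 0 else "pure public good"

print(classify_good(excludable=True, rivalry=0.3))  # -> club good (excludable, partially rivalrous)
```

Botnets, socio-technical defence systems, takedown methods, and command-and-control networks are treated in what follows as goods of exactly this kind, shared within a club and withheld from outsiders.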
According to club theory, members of a heterogeneous population partition themselves into a set of clubs that best suits their taste for association (Schelling, 1969) and cost reduction derived from team production (McGuire, 1972). Therefore, the individuals and organizations that execute and resist botnet-enabled cyberattacks and botnet takedowns can be thought of as partitioning themselves into many Internet-linked clubs, each comprised of a group who derive mutual benefits from sharing a good. By "execute" the author means the imposition of rights that were not intended by owners of computer systems, assets, data, and capabilities. By "resist", the author means the enforcement of rights that were intended by owners of computer systems, assets, data, and capabilities. A company such as Microsoft, a law enforcement agency such as the Federal Bureau of Investigations, or a nation state such as China can be members of various clubs, and these clubs can be of different types. Table 1 shows that the Internet-linked clubs that execute and resist botnet-enabled cyber-attacks and botnet takedowns can be organized into four types based on the nature of the good that members share. Clubs whose members share a botnet belong to Type 1 (Attacker). Clubs whose members share a socio-technical system belong to Type 2 (Defender). Clubs whose members share a botnet termination method to takedown a botnet belong to Type 3 (Botbeheader). Clubs whose members share a command-and-control server network belong to Type 4 (Botmaster). Type 1: Attacker Members of an Attacker club share a botnet to compromise or gain unauthorized access to an institution's systems and technology (Gallagher et al., 2014). As introduced earlier, a botnet is a network of bot-infected hosts that carry out commands sent by a botmaster, typically unbeknownst to the owners of the hosts (Yahyazadeh & Abadi, 2015). Botnets are used to carry out cyber-attacks that can cause devastating effects to individuals, organizations, and nation states. Botnet-enabled cyber-attacks are considered one of the most prevalent and dangerous threats to connected devices on the Internet today. These attacks leverage several thousands of compromised hosts and use complex network structures which are quite difficult to detect, trace and takedown (APEC, 2008;Czosseck et al., 2011;Lerner, 2014). Such malicious activities include distributed denial-of-service attacks (DDoS); Simple Mail Transfer Protocol (SMTP) mail relays for spam; adclick fraud; and the theft of application serial numbers, login IDs, and financial information such as credit card numbers and bank accounts (Cremonini & Riccardi, 2009;Khattak et al., 2014;Li et al., 2009). Type 2: Defender Members of a Defender club share a socio-technical system to detect or counteract the effects of botnet-en- The literature on how to defend against botnet-enabled cyber-attacks highlights the importance of leveraging the diverse skill sets and legal mechanisms available to corporate entities and law enforcement in the form of public-private partnership. For example, the North Atlantic Treaty Organization's (NATO) new cyber-defence policy considers cyber-attacks that threaten any member of the alliance as an attack on all which may provoke collective defense from the alliance's 28 members (Cheng, 2014). 
In 2000, the defence against cyber-attacks on Estonia was successfully carried out by a working group comprised of the ICT security community, banks, legal authorities, Internet service providers, telcommunication companies, and energy companies (Schmidt, 2012). Type 3: Botbeheader Members of a Botbeheader club share a method to terminate a botnet -a particular procedure used to identify and disrupt the botnet's command-and-control infrastructure (Dittrich, 2012;Nadji et al., 2013). Typically, this termination method embodies a legal regime (i.e., a system of principles and rules created by international or domestic law) and is denoted by words such as "behead", "takedown", "takeover", or "eradication" (Dittrich, 2012;Lerner, 2014;Nadji et al., 2013;Sully & Thompson, 2010). In recent years, governments, not-for-profit organizations, and companies have launched aggressive attacks to disrupt and disable botnets. The techniques used to takedown botnets are as varied as the botnets themselves. Many of the botnet takedown initiatives employ the use of the court system to obtain injunctions to initiate a takedown (Shirazi, 2015). Type 4: Botmaster Members of a Botmaster club share one or more command and control servers and a communications network for a particular botnet. These members are called "botmasters". The botmasters leverage the large network of infected machines, vast underground economy, and forums on the Internet (made possible by the anonymity provided by the Internet) to operate illicit businesses such as false advertising of cheap pharmaceutical drugs, malware distribution, performing a variety of scams, and sending spam emails on behalf of third-party customers (Stone-Gross et al., 2011). Club Theory Constructs Club theory is concerned with how groups (clubs) form to provide themselves with goods that are available to their membership, but from which others (non-members) can be excluded. In short, the club theory accommodates the fact that some goods can be simultaneously available to a defined and finite population and subject to explicit exclusion (Crosson et al., 2004). A construct refers to a single theoretical concept that represents one or several dimensions. Club theory builds on three constructs: i) optimal size of products, ii) optimal membership size, and iii) sharing arrangements. Size is a central characteristic of organizations that is typically measured by the number of employees, members, or total revenues. Sandler and Tschirhart (1980) explain that the optimal size of a product depends positively on its provision level. The greater the value of provision level, the greater the size or number of goods available for consumption. The optimal size of a club is the size at which members derive maximum benefits from the consumption of the shared resource. The sharing arrangements may or may not call for equal consumption on the part of each member, and the peculiar manner of sharing will clearly affect the ways in which the variable enters the utility function. This means that the provisional decisions of the good are based on the contribution of the club members: members who contribute more enjoy a larger share of the club goods (Buchanan, 1965). Method The objective of this article is to develop a model for representing botnet-enabled cyber-attacks and botnet takedowns initiatives in terms of the dimensions of the three constructs used in club theory to explain collective action. 
The model provides insights into the clubs, thereby paving the way for more effective botnet mitigation strategies. To identify the dimensions that can be used to measure the club theory's three constructs and provide examples of the values for each dimension, an interpretative approach to content analysis was used. www.timreview.ca Olukayode Adegboyega The author's interpretation of the results was based on the conceptualization of the four types of Internetlinked club and the three constructs of club theory described above. A sample comprising 10 scenarios, five for botnet-enabled cyber-attacks and five for botnet takedowns, was selected and the author collected information from the Internet for each of the scenarios in the sample. Three spreadsheets, one for each construct, were prepared. Each spreadsheet captured the potential dimensions and values collected for the 10 scenarios in the sample. Each scenario had two Internet-linked clubs. Five scenarios focused on botnet-enabled cyber-attacks and included information on two rival Internetlinked clubs, the Attacker and Defender. The five other scenarios focused on botnet takedowns and included information on two rival clubs, the Botbeheader and Botmaster. The interpretative approach of content analysis was used to identify the sets of dimensions for each construct. A final set of dimensions considered to be essential to a unified representation of botnet-enabled cyber-attacks and botnet takedowns was identified by eliminating ambiguities and inconsistencies. For each dimension, values for each scenario were identified. Finally, these values were used to compare the four types of Internet-linked clubs. Representation for Executing and Resisting Botnet-Enabled Cyber-Attacks Figure 1 illustrates a unified representation for executing and resisting botnet-enabled cyber-attacks and botnet takedowns. This representation identifies the eight dimensions that can be used to measure the three constructs from club theory for all four Internetlinked club types. Olukayode Adegboyega Membership size construct The construct "Membership size" has two dimensions: minimum number and diversity. "Minimum number" can be measured as: minimum number of individuals and minimum number of organizations. Minimum number of individuals refers to the fewest possible people responsible for executing or resisting cyber-attacks. Minimum number of organizations refers to the fewest possible organization responsible for executing or resisting cyber-attacks. The principle of minimum number was defined by White (1952) and has been used in forensic anthropology and other disciplines. The dimension "Diversity" is a measure of the uniqueness of the entities responsible for executing or resisting cyber-attacks. There exist at least four diversity types: role diversity (e.g., developer, operator, marketer, and accomplices), organization diversity (e.g., private, academic, and government), sector diversity, and country diversity. Facility size construct In club theory, facility size is determined by the provision level of the shared resource, which is negatively related to the congestion that characterizes a sharing group (Sandler & Tshirhart, 1997). The results of this research suggest that the construct "Facility size" has three dimensions: number of compromised or enduser devices, number of command-and-control servers, and number of downloadable instances of malware or anti-malware. 
The dimension "Number of devices" refers to the number of devices leveraged to execute or resist cyber-attacks with or without their owners' consent. The dimension "Number of command and control servers" refers to the number of servers used to issue commands to the computers that are part of the botnet and to accept reports back from compromised computers. The dimension "Number of downloadable instances of malware or anti-malware" refers to the number of software applications and resources used to exploit or defend against vulnerabilities in computer systems. Sharing arrangements construct The construct "Sharing arrangements" has three dimensions: arrangements to rent or purchase facility and customized services; arrangements to grow the facility; and arrangements to take order from authority. The dimension "Arrangement to rent or purchase facility and customized services" refers to agreements to derive financial benefits from the use of attack or defence infrastructures. The dimension "Grow the facility" refers to the arrangement to expand infrastructures to execute or resist cyber-attacks. There are at least three means to grow the shared facility: affordable customized products and services, hardware or software capacity upgrade, and network topology that provides control to the owner. The dimensions "Order from authority" refers to the arrangements made with one or more legal authorities to execute or resist botnet-enabled cyber-attacks. Individuals and groups leverage legal frameworks to remain anonymous, takedown botnets, and apprehend and prosecute those who cause botnet-related problems. Table 2 provides the results of examining the information collected for the 10 scenarios, five of which focused on botnet-enabled cyber-attacks and five focused on botnet takedowns. For each club type, Table 2 provides the values of the eight dimensions of the three constructs that were extracted from the information collected from the scenarios. For example, for each of the five scenarios in the Type 1 (Attacker) club, the minimum number of individuals who were known to have carried out attacks were 5, 5, 6, 7, and 62. Therefore, the first cell in Table 2 shows the range 5-62. Similarly, the minimum number of organizations collaborating to resist each of these five botnet-enabled cyber-attacks were: 8, 8, 8, 9 and 10. Therefore, the range shown in the second row of Table 2 is 8-10. These results suggest that a Type 2 (Defender) club has at least eight organizations engaged in resisting botnet-enabled cyber-attacks. Salient Characteristics of Each Club Type The information on the five botnet-enabled cyber-attacks sampled scenarios presented in Table 2 suggests that an Internet-linked Attacker club that fits Club Type 1 (Attacker) is comprised of at least five individuals. Members of this club type assume at least four individual roles to execute cyber-attacks, access millions of compromised devices and downloadable malware programs, use a minimum of one command-and-control server, remain anonymous to evade arrest, use web markets to sell products and services, and grow the facilities members share through access to multiple lowcost customized malware variants. Also, the five botnet-enabled cyber-attacks scenarios examined suggest that a club that fits Club Type 2 (Defender) club comprises at least eight organizations that act to resist a cyber-attack. These organizations operate in different sectors and countries. 
These organizations establish contractual agreements for product and service sales, grow their facility using hardware and software upgrades, and actively engage with legal authorities. www.timreview.ca Olukayode Adegboyega The information on the five botnet takedowns sampled scenarios in Table 2 suggests that a Type 3 (Botbeheader) club has at least three organizations engaged in a botnet takedown. These organizations are diverse in terms of operations, sectors, and countries, and they use tens of compromised devices and at least three command-and-control servers. Members of this club type engage in legal and contractual agreements for information sharing and grow the shared facilities via research and development as well as learning from observing information available in web markets. The results of the five botnet takedown sampled scenarios shown in Table 2 show that the minimum number of members in a club that fits Type 4 (Botmaster) ranges from one to three. These results suggest that this type of club may exists with only one member. Therefore, not all clubs of this type may embody collective action. Members of a club that fits Club Type 4 (Botmaster) have access to at least 500,000 compromised devices, 600,000 downloadable malware programs, and at least one command-and-control server. These members rely on web markets for products and services sales, grow the shared facility using network topologies designed to make botnet takedown difficult, and remain anonymous to evade arrest. Conclusions This research applies club theory to examine the collective actions of individuals and groups organized for the purpose of executing or resisting botnet-enabled cyber-attacks and botnet takedowns. The representation developed takes the club theory perspective that collective action can best be understood using three constructs: club membership size; size of the facility that club members share; and arrangements to operate, purchase/rent and grow the shared facility. The representation identifies four Internet-linked club types (i.e., Attacker, Defender, Bottbeheader, and Botmaster) and the eight dimensions of the three constructs of club theory. The representation offered is expected to enhance knowledge on the inner working of the collective actions responsible for executing and resisting botnet-enabled cyber-attacks and botnet takedowns and thereby improves communications among individuals working to solve botnet related problems in heterogeneous organizations and expedite theory development. Using club theory enhanced our understanding of the various types of Internet-linked clubs that execute and resist botnet-enabled cyber-attacks and botnet takedowns. At least three issues require further research. First, what are the specific learning-related benefits of sharing a botnet, a socio-technical system, a termination method, or a command-and-control server network? The author was not able to extract learning-related benefits from the information collected for the ten scenarios. Thus, answers to the following research questions should be found: How do clubs of the same type learn from one another? How do clubs of different types learn from one another? The author believes that answer to these questions may provide insight to the understanding of inherent motivation for forming and or joining an Internet-linked type of club. The second area of research entails the study of congestion problems that prevent members of the clubs from deriving maximum benefits from the shared resources. 
It is surmised that congestion is different across the four club types. For example, congestion in Type 1 (Attacker) clubs may be related more to monetization of products and services in web markets whereas court orders may be causing congestion in Type 3 (Botbeheader) clubs. The third area of research can focus on the study of the likely rivalry that exists within and among the four types of Internet-linked clubs to offer useful conclusions that can be used to address botnet-related problems. TIM Lecture Series Three Collaborations Enabling Cybersecurity Deborah Frincke, Dan Craigen, Ned Nadima, Arthur Low, and Michael Thomas Overview The TIM Lecture Series is hosted by the Technology Innovation Management (TIM; timprogram.ca) program at Carleton University in Ottawa, Canada. The lectures provide a forum to promote the transfer of knowledge between university research to technology company executives and entrepreneurs as well as research and development personnel. Readers are encouraged to share related insights or provide feedback on the presentation or the TIM Lecture Series, including recommendations of future speakers. The third TIM lecture of 2015 was held at Carleton University on May 14th, and was presented by several speakers, each representing different collaborations to enable cybersecurity. In the keynote presentation, Deborah Frincke, Director of Research for the National Security Agency/Central Security Service (www.nsa.gov) in the United States, described the NSA's Research Directorate and its efforts to create breakthroughs in mathematics, science, and engineering that support and enable the wider organization's activities. Next, Dan Craigen, Science Advisor at the Communications Security Establishment in Canada and a Visiting Scholar at the Technology Innovation Management program of Carleton University in Ottawa, Canada, launched the newest title in the "Best of TIM Review" book series (timbooks.ca), which he co-edited along with Ibrahim Gedeon, Chief Technology Officer at TELUS (telus.com). The book features 15 of the best articles on cybersecurity published in the TIM Review, selected and introduced by the co-editors, and with a foreword from Eros Spadotto, Executive Vice President of Technology Strategy at TELUS. Cybersecurity: Best of TIM Review is available for purchase from Amazon (amazon.com/dp/ B00XD3O6L0/) in ebook format for Kindle. All proceeds support the ongoing operation of the TIM Review. Finally, representatives from three companies -Denilson, Crack Semiconductor, and Bedarra Research Labs -described their approaches to collaboration and challenging cybersecurity problems. Part I: An introduction to the Research Directorate of the National Security Agency As Director of Research for the NSA, Frincke leads the only full-spectrum in-house research organization in the United States intelligence community, although its research activities extend beyond the organization through collaborations, linkages, and partnerships with industry, academia, and other government agencies, both within and beyond the United States. The NSA's overall objectives are to: • defend the vital networks of the United States • advance the goals of the United States and its alliances • provide guidance to national decision makers The Research Directorate engages with leading industries, universities, and national laboratories to both advance core competencies and to leverage work in Cybersecurity is a huge global issue. And no one organization can solve these problems by itself. 
We need collaborative approaches. We need to partner. We need ecosystems. We need to bring together our very best. And it's going to take time. " " www.timreview.ca TIM is a unique Master's program for innovative engineers that focuses on creating wealth at the early stages of company or opportunity life cycles. It is offered by Carleton University's Institute for Technology Entrepreneurship and Commercialization. The program provides benefits to aspiring entrepreneurs, employees seeking more senior leadership roles in their companies, and engineers building credentials and expertise for their next career move.
24,550.4
2015-06-26T00:00:00.000
[ "Computer Science", "Engineering" ]
Ultrastable liquid crystalline blue phase from molecular synergistic self-assembly Fabricating functional materials via molecular self-assembly is a promising approach, and precisely controlling the molecular building blocks of nanostructures in the self-assembly process is an essential and challenging task. Blue phase liquid crystals are fascinating self-assembled three-dimensional nanomaterials because of their potential information displays and tuneable photonic applications. However, one of the main obstacles to their applications is their narrow temperature range of a few degrees centigrade, although many prior studies have broadened it to tens via molecular design. In this work, a series of tailored uniaxial rodlike mesogens disfavouring the formation of blue phases are introduced into a blue phase system comprising biaxial dimeric mesogens, a blue phase is observed continuously over a temperature range of 280 °C, and the range remains over 132.0 °C after excluding the frozen glassy state. The findings show that the molecular synergistic self-assembly behavior of biaxial and uniaxial mesogens may play a crucial role in achieving the ultrastable three-dimensional nanostructure of blue phases. B lue phase liquid crystals (BPLCs) have self-assembled threedimensional cubic defect structures originating from the competition between the packing topology and chiral forces 1 . In BPs, mesogenic molecules are known to exhibit a "double-twist" arrangement along the x-and y-axes, and such a unique structure is called a double-twisted cylinder (DTC) [2][3][4] . In the space between the DTCs, the molecular orientation cannot pack continuously, resulting in the formation of energetically disfavoured disclinations within the cubic lattices 3,5 . As a result, BPs usually exist in a narrow temperature range of 0.5-2.0°C between an isotropic (Iso) and a cholesteric (Ch) phase in highly chiral liquid crystals. Three types of BPs, BP III, BP II, and BP I, are observed in sequence upon cooling. BP I and BP II possess body-centered cubic (BCC) and simple cubic (SC) structures, respectively, whereas BP III is considered amorphous in nature 3 . To obtain a BP with a wide temperature range, various strategies have been proposed, such as introducing polymers 1,6 and nanoparticles 7,8 as well as molecular design [9][10][11][12][13][14][15][16][17][18] . For example, Kikuchi et al. pioneered a polymer stabilized BP (PSBP) system in which the BPs were stabilized by polymer networks, obtaining a wide temperature range of over 60.0°C (~53.0 to −13.0°C) 1 . This PSBP was applied by the Samsung Co. in 2008 to demonstrate the first BP liquid crystal display prototype 19 , with attractive characteristics 20 such as a submillisecond grey-to-grey response time, color filterand alignment layers-free operation, optically isotropic voltage-off state, and large cell gap tolerance. Subsequently, Castles et al. achieved a BP composite material with a superwide temperature range (from −125 to 125°C observed via polarizing optical microscopy) by injecting an achiral LC mixture with a high clearing point into a polymer template prefabricated from a PSBP in which LC was washed-out 6 . This type of BP templated material has considerable potential in photonics applications, such as mirrorless lasers and switchable electro-optic devices 21 . 
On the other hand, developing low molecular weight BPLCs, such as those comprising well-designed molecules with special configurations (dimeric molecules 10 , U-shaped molecules 12,15,16 , T-shaped molecules, 11 or bent-core molecules 9,13,14 ), could be another promising pathway to obtaining BPs with a wide temperature range. The landmark work of Coles et al. broadened the temperature range to 44.0°C, from 60.0 to 16.0°C, using liquid crystalline dimers 10 , in which molecular biaxiality and flexoelectricity played a crucial role in the formation of stable DTC structures 5 . Furthermore, Yang et al. reported a hydrogenbonding stabilized BP over a wide temperature range and found that weak interactions from fluorinated molecules were beneficial for the formation of stable cubic nanostructures 17 . However, to our knowledge, BPLCs covering the temperature range from 80.0 to −30.0°C, which is considered the working temperature range of practical LC devices, have not been reported in low molecular weight LC systems to date. In this work, we demonstrate that an ultrastable BP system can be achieved by engineering the molecular synergistic self-assembly of tailored biaxial dimeric and uniaxial rodlike mesogens. Results Molecular design and materials combination. The chemical structures of the mesogens used in this work are shown in Fig. 1. A series of biaxial dimers with two rigid mesogenic units linked by a flexible chain were synthesized (Fig. 1a). The dimeric mixtures were prepared by mixing the dimers with biphenyl mesogenic units (BPFOn) 10 and those with triphenyl mesogenic units (TPFOn). Moreover, a series of rodlike LCs (TTPFs) comprising a terphenyl rigid core and two flexible chains with variable lengths were also achieved (Fig. 1b). Several analogs of TTPF for specific experimental requirements are listed in Fig. 1c-e. Two chiral dopants with high helical twisting power (HTP) values, BDH1281 and R5011 ( Supplementary Fig. S33), were used to offer high chiral forces for forming the DTC structures. The BP temperature ranges of all the samples capillary filled into 20.0 μm-thick cells without alignment treatment were determined using polarizing optical microscopy (POM) with a cooling rate of 0.5°C/min and corrected via differential scanning calorimetry (DSC) and the temperature dependence of the relative dielectric constant (ε r -T). The results are listed in Table 1. Phase parameters and performance of the material systems. As listed in Table 1, BP is absent in Sample 1, which is composed of 94.0 wt% TTPF, 2.0 wt% R5011, and 4.0 wt% BDH1281, because the rod-shaped configuration of TTPF is not beneficial for forming DTC structures 5 . In contrast, BPs with a temperature range of 48.5°C are observed in Sample 2, composed of 66.9 wt% BPFOn, 27.1 wt% TPFOn, 2.0 wt% R5011, and 4.0 wt% BDH1281, in which the biaxial configuration and flexoelectricity of the dimers are beneficial for stabilizing the DTC structures 2,5 . Interestingly, comparing the results of Samples 3-6, the BP temperature range substantially increased from 63.0°C to over 132.4°C upon increasing the concentration of rodlike TTPF in the samples from 9.4 wt% to 32.9 wt%, respectively. Figure 2a illustrates the optical textures of Sample 6 at different temperatures. When the sample is cooled to 92.8°C, BP textures emerge, and they remain present until −193.5°C (a video of the optical texture variation upon cooling is shown in Supplementary Movie S1). 
The Bragg reflection spectra of the BPs in Sample 6 upon cooling were measured using a fibre optical spectrometer and can always be detected from~91.0°C to −193.5°C (Fig. 2b). The central reflection wavelength of the BPs decreases from approximately 573 nm to 486 nm upon cooling from 91.0°C to 86.0°C, respectively; however, it returns to~512 nm at 80.0°C and then decreases slowly again to 483 nm upon cooling to −193.5°C. The inset in Fig. 2b shows the characteristic reflection spectra of the BP at 91.0, 90.0, 88.0, 86.0, and 80.0°C. An abrupt rebound occurred during the decrease in the central reflection wavelength when Sample 6 was cooled from 88.0°C to 86.0°C and then from 86.0°C to 80.0°C, while the color of BP platelets changed from green to blue and then from blue to green, respectively. A phase transition from BPII to BPI at~86.0°C can be confirmed via the DSC measurement data of Sample 6 from 70.0 to 110.0°C ( Supplementary Fig. S34). Moreover, the wideangle X-ray diffraction (WAXD) spectrum of Sample 6 was measured from 100.0 to −190.0°C (Supplementary Fig. S35) and confirmed that no precipitation of crystals occurred during the cooling procedure in the sample. Sample 6 in a low temperature near a liquid nitrogen temperature still maintained a BP state; generally, it may have been vitrified to a glass state, in which the structure of the BP was simply frozen. To determine the glass-transition temperature (T g ), the thermograms of Samples 3 and 6 were measured via DSC (Fig. 2c). The phase transition temperatures Iso-BP and BP-X* from DSC are in accordance with those from POM, but the glass-transition peak was discovered only in the heating process. To further affirm this T g , the ε r -T curves were obtained via an impedance analyzer (Fig. 2d). A distinct variation at apporoximately −40.0°C is exhibited in the heating and cooling processes, which is in accordance with the T g obtained from DSC. Therefore, it can be concluded that the BP state of Sample 6 was a real thermal equilibrium state from 92.8 to −39.6°C. Subsequently, a continuous increase in TTPF concentration to 37.6 wt % results in abrupt suppression of the range in Sample 7. Thus, an appropriate concentration of TTPF can substantially stabilize the cubic nanostructures of the BPs. Furthermore, large BP single crystals (Fig. 2e, the diameter of the maximum platelet exceeds 1100 μm) can be obtained from this system by only~3 h of thermal treatment upon cooling the sample in a 20.0 μm-thick cell, with no initial surface treatment, from the isotropic phase to 91.5°C at~0.1°C/min, then from 91.5 to 88.0°C at~0.02°C/min and then from 88.0°C to 83.0°C at 0.5°C/min. Highly effective approaches for fabricating a large BP single crystal were presented by developing advanced equipment and excellent professional technologies in previous studies 22,23 . We believe that better performance would be obtained if our material system was combined with these good types of equipment and technologies. Large BP single crystals easily form in this system, which is highly conducive to potential application in photonics fields 21 . In prior work, the relationship between the elastic constant (the values of our system are listed in the Supplementary Information) and molecular configuration was used by researchers to explain the effect of additives, such as bentcore molecules, in widening the BP temperature range. 
However, the underlying mechanism of the rodlike additive in widening the BP temperature range is very different from that of the bent-core additive. Therefore, a small-angle X-ray scattering (SAXS) technique was further employed to explore this mechanism via Sample 8 comprising DITPF (Fig. 1d), a Iso-BP is the phase transition temperature from an isotropic liquid to a BP. b BP-X* is the phase transition temperature from a BP to an unidentified phase. c ΔT is the temperature range of a BP. which is an iodine-functionalized rodlike mesogenic analog of TTPF. Fig. 2f, and the diffraction data of Sample 8 in this state were obtained. In contrast, no diffraction signal is observed in Sample 8 at the isotropic phase or in Sample 6 containing TTPF without iodine substituents. Figure 3g shows the intensity of scattered X-rays in arbitrary units as a function of the magnitude of the scattering vector q, where the diffraction peaks were q 1 = 0.065, q 2 = 0.075, q 3 = 0.100 and q 4 = 0. Furthermore, as listed in Table 1, Sample 9 comprises 9.4 wt% TTP without a fluorine atom attaching to the aromatic rings; and Sample 10, 4.7 wt% TTPF and 4.7 wt% TTP. They exhibit BP temperature ranges of 0.0°C and 47.0°C, respectively, which are both distinctly narrower than that of Sample 3 comprising 9.4 wt % TTPF (63.0°C). In fluorinated molecules, the excellent stability of the C-F bond as well as the small size and low polarizability of the fluorine give rise to very low intermolecular dispersion interactions, which result in subtle modifications regarding fundamental chemical and physical properties of the liquid crystals, such as the melting point and transition temperatures, as well as dielectric, optical and viscoelastic properties 26 . Therefore, TTPF exhibits a more extensive LC phase range and better intermiscibility than those of TTP, and their transition temperatures are shown in Supplementary Figs. S29 and S30, respectively. Notably, the weak interactions 26 from fluorinated molecules may play a beneficial effect in stabilizing the cubic nanostructures of the BPs. The electro-optic performances of the BP materials were investigated in a non-oriented IPS cell with a cell gap of 5.2 μm, indium tin oxide electrode width of 4 μm and an electrode gap of 5 μm. The voltage-dependent transmittance (VT) curves and the electro-optical response times of the BP samples were measured with a He-Ne laser λ = 633 nm at 20°C. The VT curve of Sample 6 shows that the increase in the transmittance is very slight when V rms < 30 V (Fig. 2h). To improve the performance, an analog of TTPF with three terminal groups of fluoro substituents named TP2FTF (Fig. 1e) was used. Compared with Sample 6, 4.7 wt%, 9.4 wt% and 14.1 wt% TTPFs were replaced by TP2FTF in Samples 11-13, respectively. The electro-optical performances of the samples improve observably upon increasing the concentration of TP2FTF; however, the BP range decreases distinctly because of the decrease in the clearing point. Therefore, the proportion of TPFOn with a high clearing point was simultaneously increased in Sample 14 when the concentration of TP2FTF was further increased to 18.8 wt%. Sample 14 exhibits a stable BP state from −38.8°C to 83.5°C, and the voltage is~2 0 V rms at the maximum transmittance, which is less than half of that of Sample 6 (approximately 50 V rms ), as shown in the V-T curve of Fig. 2h. 
Comparing the electro-optical response times of Sample 6 with those of Samples 11-14 shows that the rise time decreases from~1.62 ms to 0.493 ms upon the increase in TP2FTF with large polarity (Fig. 2i). However, the decay time slightly decreases at~6-7 ms because of the high viscosity of the material system. From the above, we can believe that the system will be endowed with better electro-optical performance when a more appropriate combination of materials is constructed in the future. Discussion It is well known that the BCC lattices of BP I are packed by DTCs (O 8+ structure) and topological defects (O 8− structure) 3 , and the existence of high energy defects (Fig. 3a, e) can destabilize BP cubic superstructures 2 . The stability of BPs is usually dependent on a well-arranged DTC structure and suppression of defect free energy 12 . Here, the free energy was suppressed by the uniaxial mesogens filling the defect regions of the cubic lattices (Fig. 3c, d). Then, the biaxial dimers and uniaxial mesogenic molecules together construct the stable DTC structure (Fig. 3b). Moreover, the weak interactions between the fluorinated LC molecules are indispensable for forming ultrastable BPs. Effectively filling the defect cores and reducing the total free energy is the most effective strategy for stabilizing BPs 1,7 . The theoretical understanding of BPs is based on the Landau-de Gennes theory 3 , and the free energy density is written as The first two terms represent the gradient free energy density f grad , and the last three terms represent the bulk free energy density f bulk ; these five terms constitute the full free energy density f full . To assess the stability of a BP with filled defects, it is necessary to compare the full free energy f full of a BP with that of a guest component filling the defects with the Ch phase in the same state. The free energy density profile f(r) can be obtained from the calculated order-parameter profile of BP I at a given temperature, and to minimize the free energy, the region of volume fraction ϕ with a higher free energy density f h in the BP I should be replaced by the guest component with a lower free energy density f l of the same volume fraction. A relevant inference was achieved by Fukuda 27 , and the f full of a BPLC and a ChLC with a guest component filling the defects was given as F BP and F Ch, respectively. where Ω full and Ω guest represent the full region and the one replaced by the guest component, respectively; ϕ is a guest component of the volume fraction; f guest is the free energy density of the guest component; σ is the interfacial energy; and s is the area of the interface per unit volume. From Eqs. (2) and (3), the free energy difference per unit volume between BP and Ch in the same state is given by Here, the free-energy difference per unit volume primarily depends on the guest component of volume fraction ϕ and the interfacial energy per unit volume σ. Fukuda computationally determined that the temperature range of stable BP I can be larger than 60°C by introducing a guest component of volume fraction less than 10% when σ is 10 −5 J m −2 . Kikuchi et al. broadened the temperature range of a stable BP I to larger than 60°C by introducing~8 wt% polymer networks 1 , which conforms to the above theory. 
The above results show that choosing suitable materials, which lead to low interfacial energy with the LC molecules in the DTCs, to fill the defects in the cubic lattice and reduce the total free energy is crucial for the stability of the BPs 27 . In the case of defects filled by polymers or nanoparticles, the free energy gain due to the replacement of the defect core region is sufficiently large to overcome the free energy loss from interfaces, so the stable range of BPs is increased 1,7 . In our system, the interfacial energy σ between the dimeric LCs and rodlike LCs is undoubtedly lower than that between the LCs and polymer networks in PSBP. The proportion of the rodlike LCs as the guest component exceeds 30 wt% in our system, and the volume fraction ϕ exceeds 15% even if half of the rodlike LCs remain in the defect region. According to the above theory, there is a tremendous gain in the temperature range of a BP. Therefore, a BP with a temperature range of over 100°C can be obtained successfully in our system. Molecular engineering aims to guide the assembly of atomic and molecular constituents into organized complex artificial materials with nanometre-sized precision and advanced functionalities 28 . Establishing a unique system for facilitating the formation of BPs via molecular design has become effective exploitation of molecular engineering, and the temperature range of BPs has been successfully broadened from a few degrees centigrade to tens of degrees centigrade. Here, the development course of low molecular weight BPLCs is summarized in Fig. 4. To date, all the methods to broaden the temperature range of BPs only by designing molecules to facilitate the formation of DTC structures are insufficient for obtaining a BP system covering the working temperature range of practical LC devices, as shown in Fig. 4. Our approach of engineering the synergistic self-assembly of molecules with distinct configurations to stabilize BPs can overcome this bottleneck, and it has been preliminarily proved to be effective for other systems in addition to the dimeric molecules system. For example, the effect of TTPF on broadening the BP temperature range also works with the commercial SLC-X previously reported 12 , as shown in Supplementary Fig. S36. We believe that better combinations of materials will be constructed in the future, which can be tailored with targeted properties satisfying different application requirements. In this work, we successfully developed an ultrastable BP system from the molecular synergistic self-assembly of molecules with distinct configurations, which can be endowed with desirable properties by different material combinations for application requirements. Based on these results, we think this molecular synergistic self-assembly of a multi-component system may also be an extremely attractive approach to developing other multifunctional soft nanomaterials. Methods Materials. The chiral dopant R5011 is a commercial product (HCCH), and other materials are synthesized in our lab. The syntheses were performed through the Suzuki coupling reaction and Williamson etherification. The detailed syntheses and characterizations of the materials are shown in the Supporting Information. All chemicals and solvents were purchased from commercial suppliers and used without further purification. Measurements. 
The optical textures of the samples sandwiched between two glass substrates that contained 20.0 µm-thick polyester spacers were measured using a polarizing optical microscope (Carl Zeiss, AxioVision SE64) equipped with a hot stage with an accuracy of 0.1°C (Linkam LTS420). Reflection spectra were taken using a fibre spectrometer (Avantes, AvaSpec-ULS2048) with a white light source. The phase transition temperatures were investigated via DSC (Perkin Elmer Pyris 6) at a rate of 5.0°C/min. The dielectric properties were characterized via the resonance method with an impedance analyzer (HP 4294A, Agilent Technology, USA) according to IEEE standards. The EO performances were studied by applying a 1 kHz AC electric field across the sample, and the response time was detected using a photoelectric converter connected to an oscilloscope. Sample 8 used for SAXS measurements was composed of 43.5 wt% BPFOn, 17.6 wt% TPFOn, 18.8 wt% TTPF, 14.1 wt% DITPF, 2.0 wt% R5011, and 4.0 wt% BDH 1281 and then sandwiched between 300 μm-thick substrates of polyimide films. To obtain a better optical texture (Fig. 2c), the sample was first cooled from 80.0 to 65.0°C at 0.05°C/min and then from 65.0°C to room temperature at 0.5°C/min. The SAXS measurements were performed with a Xeuss 2.0. The camera length was 2.4947 m, and the X-ray wavelength was 0.1542 nm. The samples were exposed to X-ray radiation for 30 min three times, and then the SAXS measurement was averaged to obtain the SAXS data. The WAXD measurements were performed at Seuss 2.0 SAXS instrument of Xenocs. Phase state and corresponding transition temperatures of LCs, as well as disclination line distance changes with the temperature to HTP of the used chiral dopants, were measured by polarizing optical microscope (Carl Zeiss, AxioVision SE64) equipped with a hot stage with an accuracy of 0.1°C (Linkam LTS420). The phase transition temperatures were investigated by DSC (Perkin Elmer Pyris 6) at a rate of 10.0°C/min. The elastic constants (K) of the system were obtained by the measurements through the use of Instec ALCT instrument. Data availability Data supporting the findings of this study are available within the article (and its Supplementary Information files) and from the corresponding author upon reasonable request. Received: 4 April 2020; Accepted: 24 January 2021; Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/ licenses/by/4.0/.
5,275.2
2021-03-04T00:00:00.000
[ "Materials Science" ]
Al-Sn NANOSTRUCTURED COATINGS ON ALUMINUM SURFACES USING ELECTROSPARK ALLOYING AND THEIR WEAR BEHAVIOR : At electrspark alloying (ESA) of aluminum surfaces using Al-Sn tool-electrode (TE), nanostructuring of manufactured surfaces take place owing to the formation of SnO 2 nanofibers. Examining the tribological properties of these surfaces in a friction couple with a counterbody made of hardened steel showed that the wear of counterbody during the friction in the oil and at dry friction exceeds by an order of magnitude and above it the wear of such surfaces. INTRODUCTION At electrospark alloying (ESA) take place removal of material from the surface of the anode and its transfer to the cathode in the air.The base of this method of the treatment is local melting of the surface in conditions of electrospark discharges [1,2].It has been show [3,4], that if the anode is a mechanical mixture of the low-melting component in a refractory matrix (for example, solid solution of Sn in aluminum matrix), the formation of nano-and microfibers fusible component in the surface layer after treatment take place at ESA.Since ESA carried out in air, the resulting nano-(micro-) fibers of low-melting component are oxides.As a resultat at ESA by TE from Al-Sn alloy in the surface layer after treatment formed of tin oxide nanowires [5]. The methods of obtaining of these surfaces at ESA and the investigation of their durability are presented in our study. Composition and structure of the TE: physical bases of obtaining the nanowires Fig. 1 shows a diagram of the state for Al-Sn binary system.One can see that at room temperature (up to the tin melting point of 228°C, the material used as a TE (AlSn20) must be an aluminum matrix with liquid tin metal dispersed in it.The TE is really an aluminum matrix with tin particles with a size of 3-5 µm dispersed in it [3].The cause for the formation of the nanowires at ASE by TE of AlSn alloy is specific character of the AlSn state diagram at temperatures that are higher than the tin melting point (228°C), but lower than the melting point of aluminum matrix (Fig. 1).In this case, the system represents melted particles of dispersed tin (tin oxide, because treatment take place in air) being in a solid matrix of aluminum.The transfer of these particles to the interelectrode gap occurs due to the pondermotive forces that deform the surface of melted drop if the surface tension force of the melt-air system is sufficiently low for the melted particles.As a result, the wires with a diameter of ~ 1µm (and lower) are formed (Fig. 2).Effects of this kind must be not only for the Al-Sn systems but also for any other systems that, at certain temperatures, are a system of melted particles in a solid matrix, for example the Al-Pb system at ASE. Methods of electrosparking machining The 8 mm diameter Al-Sn rods were used as the TE.They were doped with Cu (~1 wt %) and Ti (~1 wt %).The alloy of the required chemical composition was melted in a graphite crucible with the use of inductor of a high frequency.The melt was filled then into a specially made chill to obtain ∅8 × 50 mm rod that served as the TE. To obtain an alloy of a preset composition pure aluminum and tin were used.Doping components were introduced as intermediate alloys (50% Al + 50% Cu) and (90% Al + 10% Ti). 
An ALIER 31 installation served as the power source for the electrospark plating.A peculiarity of this device is that the frequency of the generated pulses is not directly connected with the vibration frequency of the TE but is set independently.The frequency depends on the energy in a pulse.The operation mode 5 of that installation was used in the present study.The frequency of the preset pulses was ~0.1 kHz.This was reached using a special regulator of frequency (the energy coefficient).In order to produce nanofiber structures under controllable conditions of electrospark plating and determine the optimal modes, an experimental facility was developed for a mechanized coating with a wide range of parameters (Fig. 3).Standard vibrogenerator 2 of ALIER 31 was fixed on a vertical milling head of a milling machine so that it could perform oscillating movements with the adjustable amplitude and frequency in the direction perpendicular to the movement of sliding carriage 5 on which sample 4 was fastened with screws.The amplitude was regulated by means of a special drive cammounted on a vertical shaft of the machine with a possibility of adjusting it in the range of 1 to 10 mm.This made it possible to produce a track of coatings of different widths on the specimen. The frequency of the transverse oscillations was ensured by the rotation speed (from 20 to 150 rot/min) of the vertical shaft.Direct current engine 6 was used as the drive of the shaft. Specimen 4 was fixed by screws on horizontal sliding carriage 5 of the facility.A special drive 6 (a constant current electric engine plus a reducer) was used to move it along the guide ways.Such device allowed plating of some layers step by step during every forward and backward movement of the carriage.The adjustable power supply allowed controlling the drive speed.Due to this device configuration, the TE fixed on vibrogenerator moves relative to the specimen with a constant preset feed rate.The feed movement of the sliding carriage was regulated in the range of 0.2-6.0mm/s.In addition to the indicated mechanical parameters (the specimen feed rate, amplitude and frequency of the transverse oscillations of the vibrator), the ALIER31 installation itself made it possible to vary the mode of operation, the energy coefficient and amplitude of the electrode vibration. Thus, the experimental facility provided an opportunity to vary modes by 6 parameters at the automation (hence, with stabilized parameters) coating process. After the milling cut the specimens surfaces were polished with an abrasive cloth strips to diminish roughness before the ESA coating.After polishing and marking the specimens were weighed on a VLR 200 analytic balance, and after wards they were fixed with two screws on the sliding carriage of the experimental facility.The electrode from the Al-Sn alloy (~20 mass % of Sn, manufactured by the above method) was also preliminary weighted and fastened in the vibration generator of ALIER31. The required operation modes were displayed on the control panel of the experimental facility, i.e., the energy coefficient, amplitude of the generator, mode of operation, and the amplitude and frequency of the transverse oscillations.When the facility was switched, on the mechanized coating the process started due to the sliding carriage reciprocal movement.After plating one or two layers, the specimens and electrodes were weighted again to determine the material gain or loss of the specimen or the electrode.Two methods of coating deposition were used. 
According to method 1, a constant number of coated layers at every preset speed of the TE movement relative to the specimen were deposited (at this case the effective time of coating deposition depends on the feed rate).Method 2 presupposes the usage of a constant value of the "supplied charge" or an amount of energy introduced into the discharge gap during the deposition process (in the present study it is refered to as the method "with constant energy amount").Using method 2, the number of the plated coatings was increased proportionally to an increase in the TE feed rate. According to method 1, the constant number of the deposited layers were 4 at various TE feed rates with respect to the specimen; the rates varied in the range of 0.3-2.0mm/s: the TE traveled as many as four times with a preset rate along the entire surface under treatment.In the case of deposition by method 2 four layers were deposited (the TE traveled 4 times along the surface) at the speed of 0.5 mm/s of the TE displacement and 16 layers at the feed rate of 2 mm/s; 8 and 12 layers were deposited at the feed rates of 1.0 and 1.5 mm/s, respectively. The results of the weighting were used to determine a specific deposition rate G in mg/(scm 2 ) which, depending on conditions, could be either of a positive or of a negative value; in the latter case, the weight of the specimen after the treatment did not increase but rather decreased.The total time of the treatment and the overall area of the modified surface were taken into account to calculate G. The surfaces of the specimens (prior to and after the treatment) were also examined in order to study the morphology and the elemental composition using the scanning electron microscopya TESCAN scanning microscope with an INCA Energy EDX (Oxford, Great Britain) attachment for the surface elemental analysis. To determine the surfaces roughness of the specimens (parameter Ra) and their profiles a Surtonic profilograph-profilometer (Taylor Hobson, GB) was used.The measurements were performed at 12 points along the preset length of tracing of 12.5 mm.The average value of Ra and a standard deviation were cal culated based on the obtained measurements.The wear tests were carried out using a friction machine with a reciprocal movement (from the Institute of Applied Physics of the Academy of Sciences of Moldova [2,5]). In the first variant of treatment (by hand) was obtained only the weight gain of the specimens.In the second variant of specimen treatment used in the study the number of the TE passes along the treated surface were changed at changing the TE feed rate. Fig. 4 demonstrates the dependences of a specific deposition rates on the TE feet rate across the surface under treatment.It is seen that at different methods of treatment (with a constant number of layers (method 1) and with constant energy amount (method 2) a specific deposition rate remains to be constant in a certain interval of the TE feed rates. A transition from method 1 to method 2 of the treatment changes only the "critical" value of the TE feed rate, when we can notice the transation from ESA mode which yields the weight gain of the specimen under treatment (let us call it mode I) to the mode where the mass loss occurs after ESA (we shall refer to it the method of "sparking" or mode II, which means modification of the surface under sparks action without noticeable weight gain) -Fig.4. Wear resistance of the coatings after ESA by the Al-Sn alloy. 
ESA by hand The study of the mechanical properties of the surfaces developed with the formation of nanofibers from low-melting component was carried out using friction machin (see before).Counterbody was plate from the hardened steel St.45 with microhardness 650 ±50 kgf/mm 2 .The couterbody performed a reciprocal movement relatively the specimen under study at a 45 double movements a minute.The length of the working surface that contacted the counterbody was 48 mm.The weight loss measurements were performed both for the counterbody (∆U cb ) and for the tested specimens (∆U).A degree of the wear was estimated both in the absolute and in the relative (∆U cb /∆U = K) values.The testing was both at dry friction and lubricated friction.Test experiments were performed in two stages.At the first stage grinding-in of the counterbody and the test surface was performed.It was carried out during ten hours of testing with load changing from 2 to 9 kg (a contact area of tested surface with the counterbody was 9 mm 2 , and the overall area of the treated surface against which the counterbody was performing a reciprocal movements was 165 mm 2 ). The grinding-in at the initial and final loads was done during two hours and the intrmediate loads during one hour.At the second stage, the main test experiments were performed at 9 kg load during 20 h.Before and after every test the surface roughness (Ra The obtained results of the tests for surfaces, manufactured ESA by hand, are integrated in the diagrams of Fig. 5.As is seen, in contrast to be plating with Al-Sn alloy (i.e. in the conditions under which nanofibers of low-melted component of the TE are formed owning ESA) during Al and Sn TE plating at ESA, the wear resistance coatings cannot be manufacturing (Fig. 5). In addition, it seems obvious that a crucial role in the observed effect of the extremely high degree of the counterbody wear made of hardened steel during its contact with the plated surface belongs to the formation not of tin nanofibers but rather of tin oxide nanofibers, since during the manufacturing of coatings in an argon atmosphere this effect is not observed (Fig. 5b, see also Fig. 2). Wear resistance of the coatings after ESA by the Al-Sn alloy. ESA by automation The machinig in conditions of ESA by automation take place in two variants [6]: mode I (the weight gain of the specimen under treatment) and mode II (method of surface modification without the weight gain, "sparking" of the surface).Results, presented on Fig. 4 and 6, shows that transition from mode I to mode II is accompanied: (1) the weight loss after the treatment for the both the specimen and the TE; (2) the decrease in the surface roughness. Fig. 2 shows that nanofibers are also formed during the automatic coating process.In certain cases they are even of micrometric sizes (Fig. 2c, d).However, in this case (the formation of "thick" fibers) it is possible to determine their elemental composition more precisely (Fig. 2e).It is obvious that the stoichiometry of the fibers is such that the tin concentration is substantially higher than corresponding SnO 2 . The evident reason for this phenomenon is pulling the tin fibers with oxidized surface out of the tin alloy melt under the action of an electric discharge.The fiber core is tin.It is apparent that the ratio of tin and the surface oxide in the fiber will depend on the both the size of the melted drop (i.e. 
the diameter of the As is follows from fig 7, the excessive wear of the counterbody relatively to the treated surfaces at dry friction is observed for the specimens obtained by method 2 in all cases (both mode I and mode II).For the surfaces manufacturing in the "sparking" mode the K value is maximal, and the counterbody wear exceeds the wear of the ESA surface almost by an order of magnitude.For the method 1 (constant number of deposited layers) K value depend from TE feed rate.At low TE feed rates K˃˃ 1.At TE feed rates ~ 1 and more K ≤ 1. The abnormal wear of counterbody from hardened steel at friction with nanostructuring surfaces, obtained by ESA TE from Al-Sn take place both at dry friction (Fig. 7) and lubrication friction (Fig. 8).In latest case K can be equally ∞ (Fig. 8).This means that material of the counterbody from hardened steel at friction "smeared" as a butter on a nanostructured surface, obtained by ESA. Earlier it was shown that the observed effect of abnormal wear of the counterbody made of hardened steel that worked in the friction couple with the coatings under study (but manufactured in the manual mode), increases with the increase of the surface roughness [3,5]. The results of this investigation could answer the question if the fact of high Ra values that are reached under the ESA is a necessary condition for the observed effects.The results presented in Fig. 7, 8 allow us to infer: a) under automatic deposition of the electrospark coatings (electrospark surface modification) the effect of abnormal wear of the hardened CONCLUSIONS Surfaces nanostructuring as the result of the ESA with TE made of the alloy containing the infusible matrix with an easily fusible component as a mechanical mixture have unique abrasive properties.Both at dry friction and a lubrication friction the excessive wear of the counterbody made of the hardened steel take place in the friction couple with such surfaces.Abnormal wear of countrbody at friction with these surfaces take place owing to the formation of SnO 2 nano-(micro-)fibers at ESA with Al-Sn TE. Fig. 1 . Fig. 1.State diagram for the aluminum-tin system Fig. 2 . Fig. 2. Morphology of surface layers manufactured using method 1 at TE feed rate of 0.6 mm/s (a, b), 1 mm/s (c, d), EDX spectrum and elemental composition of the surface segment (e) marked in Fig. 2d Fig. 4 . Fig. 4. Dependence of ESA deposition rate on TE feed rate for specimens obtained by method 1 (constant number of layers) and method 2 (constant amount of energy): region Imode with mass gain; region IImode with mass loss Fig. 5 . Fig. 5. Relative wear of the surfaces during the ESA in air (a) and the comparison of the surfaces' wear (b) during the treatment in air (1) and in an argon atmosphere (2)the numbers in Fig. 5(a) correspond to the modes of the treatment of generator ALIER-31; upper arrows (a) shows testing results for surfaces after polishing (∆U cb /∆U˃1) and without ESA (∆U cb /∆U<1) parameter) was measured.These values for the different conditions of coating deposition are shown in Fig. 6, 7 (for ESA by automation).The obtained results of the tests for surfaces, manufactured ESA by hand, are integrated in the diagrams of Fig. 5.As is seen, in contrast to be plating Fig. 6 . Fig.6.Ra dependence on TE feed rate for method 2 treatmentashed area is region of transition from mode I to "sparking" mode (mode II) Fig. 8 . Fig. 
Fig. 1. Phase (state) diagram of the aluminum-tin system.
Fig. 2. Morphology of surface layers manufactured using method 1 at TE feed rates of 0.6 mm/s (a, b) and 1 mm/s (c, d); EDX spectrum and elemental composition of the surface segment (e) marked in Fig. 2d.
Fig. 4. Dependence of the ESA deposition rate on the TE feed rate for specimens obtained by method 1 (constant number of layers) and method 2 (constant amount of energy): region I, mode with mass gain; region II, mode with mass loss.
Fig. 5. Relative wear of the surfaces after ESA in air (a) and comparison of the surfaces' wear (b) after treatment in air (1) and in an argon atmosphere (2); the numbers in Fig. 5(a) correspond to the treatment modes of the ALIER-31 generator; the upper arrows in (a) show testing results for surfaces after polishing (∆Ucb/∆U > 1) and without ESA (∆Ucb/∆U < 1).
Fig. 6. Dependence of Ra on the TE feed rate for method 2 treatment; the dashed area is the region of transition from mode I to the "sparking" mode (mode II).
Fig. 8. Effect of the TE feed rate (relative to the treated surface) on the counterbody relative wear at lubricated friction with the ESA-treated surfaces, after grinding-in (1) and after the two stages of testing (2).
4,212.6
2018-07-24T00:00:00.000
[ "Materials Science", "Engineering" ]
Human motion correction and representation method from motion camera : Motion estimation is a basic issue for many computer vision tasks, such as human – computer interaction, motion objection detection and intelligent robot. In many practical scenes, the object movement goes with camera motion. Generally, motion descriptors directly based on optical fl ow are inaccurate and have low discrimination power. To this end, a novel motion correction method is proposed and a novel motion feature descriptor called the motion difference histogram (MDH) for recognising human action is proposed in this study. Motion estimation results are corrected by background motion estimation and MDH encodes the motion difference between the background and the objects. Experimental results on video shot with camera motion show that the proposed motion correction method is effective and the recognition accuracy of MDH is better than that of the state-of-the-art motion descriptor. Introduction Motion estimation and recognition is the foundation of many computer vision works, especially for object motion analysis in visible light camera. It is widely used in many applications, such as human-machine interaction, video surveillance, event retrieval and intelligent vehicles. In many practical scenes, the object movement goes with camera motion. So recognising human motion from motion camera is a hot research topic in computer-human interaction [1,2] and computer vision [3,4]. The approaches of human action recognition involve motion estimation/representation, object detection and trajectories. In most of these video analysis tasks, the motion feature is popularly used as a low-level vision feature and plays an important role. However, in real scenes, owing to the movement of the camera and objects, error exists in motion estimation, reducing the discrimination power of the motion descriptor. For motion recognition in complex scenes, especially in a camera motion environment, how to model camera motion is still an open issue. Wang and Schmid [5] estimated camera motion by matching feature points between frames and using the motion boundary histogram (MBH) to represent motion. Unfortunately, there is no clean solution to this problem. Towards this end, we propose a novel correction method for motion estimation results and a novel motion descriptor called the motion difference histogram (MDH) is calculated, which regards the background motion as camera motion. To estimate motion and compute MDH, the dense optical flow of the video is extracted via the Lucas-Kanade (LK) algorithm [6]. The maximising component is regarded as camera motion, and the real motion is the relative motion between the optical flow and camera motion. Finally, the histogram of the orientation of real motion is computed as MDH. To verify the accuracy of motion correction and the discrimination power of MDH, we use the conventional bag-of-words (BOW) model to represent the motion. The video is regarded as a set of spatiotemporal interest points (STIPs) detected by a 3D Harris algorithm [7]. MDH is used for the motion representation of STIP, and the visual word vocabulary of the action is constructed. Finally, the motion is regarded as the visual word feature, and the support vector machine (SVM) classifier is trained for motion recognition. Fig. 1 shows the motion recognition strategy of BOW model. In this work, we focus on motion estimation and representation, the BOW model is a simple pattern recognition model to evaluate the motion descriptor. 
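As a concrete illustration of the flow-extraction step described above, the sketch below detects Harris corner points and tracks them with the pyramidal LK algorithm (the paper computes dense LK flow; sparse corner tracking is shown here for brevity). OpenCV is an assumption, since the paper names only the algorithms and not an implementation, and the synthetic frames stand in for consecutive video frames:

```python
import cv2
import numpy as np

def corner_flow(prev_gray: np.ndarray, next_gray: np.ndarray) -> np.ndarray:
    """Sparse optical flow at corner points via the pyramidal Lucas-Kanade algorithm.
    Returns an N x 2 array of flow vectors (dx, dy) for successfully tracked points."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=5,
                                  useHarrisDetector=True)
    if pts is None:
        return np.empty((0, 2), np.float32)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None,
                                              winSize=(15, 15), maxLevel=3)
    ok = status.ravel() == 1
    return (nxt[ok] - pts[ok]).reshape(-1, 2)

# Hypothetical usage with two synthetic frames (in practice, consecutive video frames).
f0 = np.random.randint(0, 255, (240, 320), np.uint8)
f1 = np.roll(f0, shift=(2, 3), axis=(0, 1))   # simulate a global (camera-like) shift
print(corner_flow(f0, f1).mean(axis=0))       # the dominant flow approximates camera motion
```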
The contributions of our work are threefold: (i) We propose a specific approach to estimate background/camera motion, and the human motion is corrected using the difference between the optical flow and the camera motion. (ii) We propose a novel motion descriptor for discriminative action representation. The experimental results show that the discrimination power of MDH is better than that of the state-of-the-art motion descriptors. (iii) The proposed motion descriptor is sufficiently general for other off-the-shelf vision tasks. We are open to replacing the BOW recognition model with a more robust model; currently, we place greater emphasis on the accuracy of motion estimation and motion representation. The remainder of this paper is organised as follows. Section 2 reviews some related works. Section 3 describes the proposed method. Section 4 presents and discusses our experimental results. Finally, Section 5 concludes the paper. Related works The main approaches to motion estimation from a moving camera involve optical flow, frame/background difference methods and object tracking. The optical flow method [8] calculates the motion between two frames based on the optical flow constraint equation, which assumes that the motion remains constant over a very short time. The frame difference method requires a good and robust background model. Object tracking, in turn, relies on an accurate object detector and tracker. However, due to camera motion, the motion estimation of these methods is inaccurate. In this work, we propose a new correction method to calculate the real motion from the optical-flow estimate. To verify the accuracy of the proposed motion correction method, a motion descriptor based on the corrected motion is calculated for STIPs to recognise human motion. Many studies in the literature indicate that STIP is widely used in human action recognition tasks owing to its robustness and good performance. In this study, we also focus on STIP and discuss the motion descriptor of STIP. Generally, two descriptor types are used to represent motion: absolute motion descriptors and relative motion descriptors. The absolute motion descriptor is computed directly from the optical flow, such as the histogram of the orientation of optical flow (HOF) [9]. This approach is simple but inaccurate owing to background motion, especially camera motion. The relative motion descriptor receives more attention because of its good performance in human action recognition. Frequently used relative motion descriptors include MBH [5] and Internal Motion Histograms (IMHcd). In this study, we also discuss the relative motion descriptor and propose a novel descriptor named MDH. In contrast to these descriptors, MDH estimates the camera motion by maximising the statistical distribution of the optical flow. The real motion of each pixel is obtained by subtracting the camera motion from the optical flow. To verify the discrimination power and effectiveness of MDH, we use the BOW model to construct the action representation based on the motion descriptor. An SVM classifier is constructed to recognise action. In this study, the emphasis is on the effectiveness of MDH, which is demonstrated by comparison with MBH and IMHcd. BOW is widely used in many vision tasks. Wang et al. [10] used the K-means algorithm to create visual words, and action is expressed as the word sequence. Niebles et al.
[11] used an unsupervised learning algorithm to create a visual word codebook, and actions were recognised via the probabilistic latent semantic analysis (pLSA) or latent Dirichlet allocation (LDA) algorithm. In this study, the emphasis is on the effectiveness of motion correction and MDH, which is demonstrated by comparison with MBH and IMHcd. Proposed motion correction method and motion descriptor We describe the proposed motion correction method and motion descriptor for STIP as follows. Motion correction method To calculate precise motion from a moving camera, it is necessary to eliminate the influence of camera motion. To this end, we assume that the background motion is caused by camera motion and therefore treat the background motion as the camera motion. Thus, relative motion is a good solution. Firstly, the optical flow I is computed based on a pyramidal frame structure. With camera motion, the optical flow I is the sum of the object motion I_r and the camera motion I_c: I = I_r + I_c (1). These motion vectors can be decomposed into the horizontal and vertical directions (x and y directions) as follows: I_x = I_rx + I_cx, I_y = I_ry + I_cy (2), where I_rx indicates the object motion in the x-direction, I_ry the object motion in the y-direction, I_cx the camera motion in the x-direction and I_cy the camera motion in the y-direction. Within the same image, the camera motion is fixed for all points. The object motion vector is estimated by solving for I_rx and I_ry. The key is how to estimate the camera motion. However, estimation of camera motion directly from video data is still a challenging problem in computer vision. In this work, the background motion is estimated by analysing the optical flow of dense interest points, and the background motion is regarded as the camera motion when computing the object motion. To compute the background motion, the local interest points of the image are extracted first. In this work, we use Harris corner points as the detector and extract the optical flow of these interest points via the LK algorithm. Some examples are shown in Figs. 2a and b. The optical flow of these points is decomposed into the x and y directions. The value range is divided into ten intervals, and the distribution of points over these intervals is accumulated; each value in an interval indicates the number of points. Examples are shown in Fig. 2c. The maximum of the histogram is regarded as the background motion (camera motion), because the overwhelming majority of the moving points are caused by camera motion and the movement patterns of these points are consistent. The background motion pattern is shown in Fig. 2d. The relative motion can then be estimated by using (2). Motion descriptor and recognition method After precise relative motion estimation, to evaluate the effectiveness of motion correction and motion representation, we use the relative motion feature to recognise human motion. A new descriptor named the motion difference histogram (MDH) is computed in the spatiotemporal domain of each STIP. The domain is divided into 3 * 3 * 2 cells, and the histogram of the orientation of the relative motion is computed in each cell. The angles from 0° to 360° are divided into nine intervals. Finally, by combining the histograms of these cells, the dimension of MDH is 3 * 3 * 2 * 9 = 162. The computational process of MDH is shown in Fig. 3.
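To make the correction step concrete, the following minimal NumPy sketch (an illustration, not the authors' code) estimates the camera motion as the most populated interval of the flow histogram, subtracts it to obtain the relative motion, and forms one cell's orientation histogram. The bin counts (ten value intervals, nine orientation bins over 0°-360°) follow the description above; the helper names and the stand-in flow field are hypothetical:

```python
import numpy as np

def camera_motion(flow: np.ndarray, bins: int = 10) -> np.ndarray:
    """Estimate camera (background) motion as the most populated interval of the
    flow histogram, computed independently for the x and y components.
    flow: H x W x 2 array of optical-flow vectors."""
    cam = np.zeros(2)
    for c in range(2):
        comp = flow[..., c].ravel()
        hist, edges = np.histogram(comp, bins=bins)
        k = np.argmax(hist)                       # dominant motion interval
        cam[c] = 0.5 * (edges[k] + edges[k + 1])  # its centre = camera motion estimate
    return cam

def relative_motion(flow: np.ndarray) -> np.ndarray:
    """Object (relative) motion = optical flow minus the estimated camera motion."""
    return flow - camera_motion(flow)

def orientation_histogram(rel: np.ndarray, bins: int = 9) -> np.ndarray:
    """Histogram of the orientation of the relative motion for one cell (0-360 deg)."""
    ang = np.degrees(np.arctan2(rel[..., 1], rel[..., 0])) % 360.0
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 360.0))
    return hist / max(hist.sum(), 1)              # normalised histogram

# Hypothetical usage: an MDH for a STIP would concatenate such histograms over the
# 3 x 3 x 2 cells of its spatiotemporal neighbourhood (3*3*2*9 = 162 dimensions).
flow = np.random.randn(64, 64, 2).astype(np.float32)   # stand-in flow field
print(orientation_histogram(relative_motion(flow)))
```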
To recognise human action, the video is represented as a histogram feature over a visual word dictionary. To create the visual word dictionary, we use the K-means algorithm for each category based on the STIPs and the motion descriptor. The length of the dictionary in each category is k; the overall dictionary is formed by combining the dictionaries of all categories (giving C * k visual words). After computing the video feature, the SVM classifier is trained for action recognition. In this work, the RBF kernel over the video features (visual word histograms) H_i and H_j is used to train and test the SVM classifier, with the kernel width σ² estimated by cross-validation. Dataset and parameter setting In this study, we discuss the motion correction and motion descriptor method in scenes with camera motion. The accuracy and effectiveness of the proposed motion correction and descriptor method are verified on a human motion recognition challenge. The method is evaluated on the YouTube dataset [12], which contains 11 actions (C = 11): 'basketball shooting', 'biking/cycling', 'diving', 'golf swinging', 'horseback riding', 'soccer juggling', 'swinging', 'tennis swinging', 'trampoline jumping', 'volleyball spiking' and 'walking with a dog'. All of the videos in this dataset were collected from the YouTube website. The dataset is challenging owing to the large variations in camera motion, object appearance, object pose, object scale, viewpoint, background clutter and illumination conditions. Each action has 25 subjects (S = 25) covering more than 4 different environments (E ≥ 4), for a total of 1599 videos. Fig. 4 presents some examples from the YouTube dataset. Performance evaluation of human motion recognition To verify the accuracy of motion correction and the discrimination power of the proposed descriptor, we compared MDH with MBH, HOF [9] and IMHcd on the YouTube dataset. In our experiment, we used 25-fold leave-one-out cross-validation to measure the performance of the proposed method. In each round, one subject is selected as the testing data (N_test = C * E) and the remaining subjects form the training data, for a total of C * E * (S − 1) videos. To create the dictionary, the cluster number is set at k = 20. The accuracy is the average over 25 rounds. The comparison result is shown in Table 1 (bold values indicate the proposed method and the best results). From the comparison, we find that the improvement of MDH over HOF and IMHcd is more than 2%, and MDH is also better than MBH. Moreover, according to the theory of feature descriptors in human motion recognition, an appearance feature combined with a motion feature has better performance. In the experiment, we therefore also compared motion features combined with appearance features in Table 1. In Table 1, HOG (histogram of orientation of gradient) is the appearance feature, HNF means the HOG feature combined with HOF, and HOGNMDH means the HOG feature combined with MDH. From the results, the performance of HOGNMDH is better than HNF; the improvement of HOGNMDH is more than 2%. As mentioned in Section 3.2, the recognition performance is sensitive to the cluster number. In the experiment, we therefore study the cluster number k for action recognition, with k set from 20 to 150, and compare the performance of the HNF and HOGNMDH features. The experimental result is shown in Table 2 and Fig. 5. From Table 2 and Fig. 5, we find that the accuracy of the HNF feature at k = 100 and k = 150 is 58.23% in both cases, while the accuracy of the HOGNMDH feature is 61.42 and 65.81%, respectively. The improvement from MDH at k = 100 and k = 150 is thus 3.19 and 7.89%, respectively, which further verifies the effectiveness of the motion correction. At the same time, there is almost no improvement for the HNF feature when the cluster number k increases from 100 to 150. Finally, the confusion matrices of the HNF and HOGNMDH features are shown in Figs. 6a and b.
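The following sketch shows one way the per-category K-means dictionaries, BOW histograms and RBF-kernel SVM described in the recognition method above could be wired together. scikit-learn is an assumption (the paper does not name an implementation), and the descriptor data, per-fold class counts and gamma choice are placeholders rather than the authors' settings:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
C, k = 11, 20                       # number of action categories, words per category

# Stand-in MDH descriptors (162-dim) for the STIPs of each category's training videos.
train_desc = {c: rng.normal(size=(500, 162)) for c in range(C)}

# Per-category K-means dictionaries, concatenated into one vocabulary of C*k words.
vocab = np.vstack([KMeans(n_clusters=k, n_init=4, random_state=0)
                   .fit(train_desc[c]).cluster_centers_ for c in range(C)])

def bow_histogram(stip_descriptors: np.ndarray) -> np.ndarray:
    """Encode a video as a normalised histogram over the visual-word vocabulary."""
    d = np.linalg.norm(stip_descriptors[:, None, :] - vocab[None, :, :], axis=2)
    words = d.argmin(axis=1)                       # nearest word for each STIP
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / max(hist.sum(), 1)

# Stand-in labelled videos -> BOW features -> RBF-kernel SVM. The kernel width
# (gamma ~ 1 / (2 * sigma^2)) would be chosen by cross-validation in practice.
X = np.array([bow_histogram(rng.normal(size=(80, 162))) for _ in range(44)])
y = np.repeat(np.arange(C), 4)
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict(X[:3]))
```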
Conclusions In this study, we propose a novel motion correction method and a motion descriptor called MDH. In MDH, the camera motion is estimated, and the relative motion is computed as the difference between the optical flow and the camera motion. To verify the effectiveness of the proposed motion correction method, MDH is built to recognise human motion. Experimental comparisons with other relative motion descriptors show that the proposed descriptor is effective for motion description under camera movement, and the motion correction method is useful for estimating real motion in scenes with camera movement. MDH is general enough to be used with other action recognition approaches and other vision tasks. In the future, we will use a more robust and discriminative action recognition approach to achieve better performance. Acknowledgments The work was supported by the Natural Science Foundation of China (no. 61502182) and the Natural Science Foundation of Fujian Province of China (nos. 2014J01249, 2015J01253).
3,358.6
2017-07-01T00:00:00.000
[ "Computer Science" ]
Synthesis of pro-apoptotic indapamide derivatives as anticancer agents. Abstract 4-Chloro-3-({[(substitutedamino)carbonothioyl]amino}sulfonyl)-N-(2-methyl-2,3-dihydro-1H-indole-1-yl)benzamide (1–20) and 4-chloro-3-({[3-(substituted)-4-oxo-1,3-thiazolidine-2-ylidene]amino}sulfonyl)-N-(2-methyl-2,3-dihydro-1H-indole-1-yl)benzamide derivatives (21–31) were synthesized from 4-chloro-N-(2-methyl-2,3-dihydroindol-1-yl)-3-sulfamoylbenzamide (indapamide). 4-Chloro-3-({[(4-chlorophenyl) amino) carbonothioyl]amino}sulfonyl)-N-(2-methyl-2,3-dihydro-1H-indole-1-yl)benzamide 12 demonstrated the highest proapoptotic activity among all synthesized compounds on melanoma cell lines MDA–MB-435 with 3.7% growth inhibition at the concentration of 10 µM. Compound 12 (SGK 266) was evaluated in vitro using the MTT colorimetric method against melanoma cancer cell line MDA–MB435 growth inhibition for different doses and exhibited anticancer activity with IC50 values of 85–95 µM against melanoma cancer cell line MDA–MB435. In addition, this compound was investigated as inhibitors of four physiologically relevant human carbonic anhydrase isoforms, hCA I, II, IX and XII. The compund inhibited these enzymes with IC50 values ranging between 0.72 and 1.60 µM. Introduction In the progress of novel drug discovery, the easiest and most effective way is to work with drug substances with known activity and molecular structure. Basic method is to synthesize the new analogs and homologs of drug substance chosen as precursor with proven biological activity and molecular structure. In this way, it is possible to approach novel drug substances with the same or different biological activity. Moleculer modification offers the researchers the opportunity of improvement in properties of biological efficiency, mechanism of action, administration pathway, toxicity and stability. If it is not possible to obtain drug candidates from precursor drugs, the knowledge will be gained for the synthesis of novel drug substances which could be a key role in research area. Molecular modification methods give the chance to obtain active pharmaceutical ingredients with several pharmacological activities. One of the most significant examples is sulfonamide derivatives which have several biological activities ( Figure 1). Sulfonamides represent one of the classical chemotypes associated with potent carbonic anhydrase (CA) inhibition 1 . CA I and II, CA isoforms, are rather abundant in many tissues and participate in important physiological processes 2,3 . Indapamide, 4-chloro-N-(2-methylindolin-1-yl)-3-sulphamoylbenzamide, have been discovered in the 1960s-1970s, when little was known about the various CA isozymes. This drug was a much weaker one (KI of 2520 nM) inhibitor against CA II 4 . General All chemicals were purchased from Merck (Darmstadt, Germany), Sigma-Aldrich (St. Louis, MO) or Fluka (Buchs, Switzerland). Melting points were determined with a Barnstead melting point apparatus (Barnstead/Electrothermal 9200). Infrared (IR) spectra (KBr disc) were obtained with a Perkin Elmer Spectrum One (Waltham, MA). 1 H NMR and 13 C NMR spectra in DMSO-d 6 were recorded on a BRUKER AVANCE-DPX (Billerica, MA) spectrometer (400 MHz) and chemical shifts are given in ppm downfield from tetramethylsilane (TMS) as an internal standard using DMSO-d 6 as solvent. Data are reported as follows: chemical shift, multiplicity (br.: broad singlet, d: dublet; m: multiplet, s: singlet and t: triplet), coupling constants (Hz), integration. 
Elemental analyses were performed on Flash EA 1112 series elemental analyzer (Thermo Finnigan, Italy). Mass spectra were measured on a JMS-700 double-focusing mass spectrometer (JEOL, Akishima, Tokyo, Japan). Follow up of the reactions and checking the purity of the compounds were made by TLC on silica gel protected aluminium sheets (Type 60 F 254 , Merck 1.05550.0001) (Darmstadt, Germany), and the spots were detected by means of UV lamp at ¼ 254 nm. Biology Primary anticancer assay was performed in accordance with the protocol of the Drug Evaluation Branch, National Cancer Institute, Bethesda [8][9][10][11] . The human tumor cell lines of the cancer screening panel were grown in RPMI 1640 medium containing 5% fetal bovine serum and 2 mM L-glutamine. For a typical screening experiment, 100 mL of cells were inoculated into 96-well microtiter plates at plating densities ranging from 5000 to 40 000 cells/well depending on the doubling time of individual cell lines. After cell inoculation, the microtiter plates were incubated at 37 C, 5% CO 2 , 95% air and 100% relative humidity for 24 h prior to addition of experimental drugs. The cytotoxic and/or growth inhibitory effects of the compounds were tested in vitro against the full panel of 60 human tumor cell lines derived from nine neoplastic diseases at 10-fold dilutions. The percentage of growth was evaluated spectrophotometrically versus controls which were not treated with the test agents. Briefly, effect of the compounds on the growth parameters of the different cancer cell lines was evaluated relative to equivalent amounts of DMSO treated controls and expressed as percent growth rate. The compounds were added at 10 À5 M concentration for 48 h. 15 . The MTT metabolic assay was carried out at the seeding density of 1 Â 10 4 cells/well in 96-well flat-bottom cell culture plates with 100 mL of opti-MEM (invitrogen, USA). Following 24-h incubation at 37 C, 5% CO 2 , media was aspirated, compounds were dissolved in DMSO and diluted with medium before addition to the cell cultures at the concentrations of 5 and 10 mg/mL. Cells were incubated for 48 h at 37 C, 5% CO 2 . After the incubation period 10 mL of the MTT labeling reagent [final concentration 0.5 mg/mL (Cell proliferation kit MTT, Roche, Germany)] was added to each well. Samples were incubated for 4-12 h in a humidified atmosphere (e.g. 37 C, 5.0% CO 2 ) and 100 mL of the solubilization buffer was added into each well. The plate was allowed to stand overnight in the incubator in a humidified atmosphere (e.g. 37 C, 5% CO 2 ) and the formazan precipitates were then solubilized. Absorbance of the formazan product was measured spectrophotometrically at 550 and 690 nm. Statistical analyses were done using unpaired Student's t-test using Prism 3.0 (GraphPad Software, San Diego, CA). TUNEL assay Terminal deoxynucleotidyl transferase dUTP nick end labelling (TUNEL) staining was performed on MDA-MB-435 cell line. Cells were cultured in DMEM supplemented with 4.5 g/L glucose, 10% heat-inactivated fetal bovine serum, 100 units of penicillin/ ml and 100 mg of streptomycin/ml at 37 C in a humidified atmosphere of 5% CO 2 in air. Cells were seeded into 6-well plates at a density of 1.5 Â 10 5 cells/well. Following one-day incubation, medium was replaced and adjacent wells have been inoculated with different concentrations of synthesized compound. 
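As an illustration of the readout arithmetic only (not the authors' code; the exact formula they applied is not stated in this excerpt), a short Python sketch of how background-corrected MTT absorbances (550 nm minus the 690 nm reference) are commonly converted to percent growth relative to DMSO-treated controls, using hypothetical triplicate readings:

```python
import numpy as np

def corrected_absorbance(a550: np.ndarray, a690: np.ndarray) -> np.ndarray:
    """Background-correct the formazan signal (550 nm) with the 690 nm reference."""
    return np.asarray(a550, float) - np.asarray(a690, float)

def percent_growth(treated, control, blank: float = 0.0) -> np.ndarray:
    """Growth of treated wells as a percentage of DMSO-treated control wells."""
    t = corrected_absorbance(*treated) - blank
    c = corrected_absorbance(*control).mean() - blank
    return 100.0 * t / c

# Hypothetical triplicate readings: (A550, A690) for treated and control wells.
treated = (np.array([0.42, 0.40, 0.44]), np.array([0.05, 0.05, 0.06]))
control = (np.array([0.95, 0.98, 0.93]), np.array([0.05, 0.04, 0.05]))
print(percent_growth(treated, control))   # roughly 40% growth relative to control
```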
Apoptosis was detected after 24 and 48 h, using ApopTag Plus in situ apoptosis detection kit peroxidase (Chemicon) following manufacturer's protocol with minor modifications. Apoptotic cells were observed brown after the color reaction with DAB (3,3 0diaminobenzidine), while counter staining was managed using methyl green to observe the living cells. Apoptotic cells were detected by standard light microscopy. Live and death cells were counted by two independent observers. TUNEL assay identifies early stage apoptosis by enzymatically labeling 3 0 -OH end of DNA strand breaks with modified nucleotides. Since late stage apoptotic and necrotic cells were detached in the adherent cell culture, given counts merely reflects cells in the early apoptotic stage. Flow-cytometric analysis for apoptotic cell rate by Annexin V-FITC Test was performed using ''Annexin V-FITC apoptosis detection kit'' (eBioscience) as described previously 16 . Briefly, MDA-MB-435 cells (1 Â 10 6 ) both untreated and treated with indapamide derivatives, were harvested, washed with PBS twice and suspended with binding buffer. The cells were double-stained with Annexin-V-FITC and propidium iodide for 10 min in the dark at room temperature. CA inhibition studies Phenol red (at a concentration of 0.2 mM) has been used as indicator, working at the absorbance maximum of 557 nm, with 20 mM Hepes (pH 7.5) as buffer, and 20 mM Na 2 SO 4 (for maintaining constant the ionic strength), following the initial rates of the CA-catalyzed CO 2 hydration reaction for a period of 10-100 s. The CO 2 concentrations ranged from 1.7 to 17 mM for the determination of the kinetic parameters and inhibition constants. For each inhibitor at least six traces of the initial 5-10% of the reaction have been used for determining the initial velocity. The uncatalyzed rates were determined in the same manner and subtracted from the total observed rates. Stock solutions of inhibitor (0.1 mM) were prepared in distilled-deionized water and dilutions up to 0.01 nM were done thereafter with distilleddeionized water. Inhibitor and enzyme solutions were preincubated together for 15 min at room temperature (prior to assay, in order to allow for the formation of the E-I complex). The inhibition constants were obtained by non-linear least-squares methods using PRISM 3, as reported earlier 17 and represent the mean from at least three different determinations. CA isoforms were recombinant ones obtained in house as reported earlier [17][18][19] . The 1 H NMR spectra of the sulfonylthiourea derivatives (1-20) revealed that CH 3 protons at indoline ring came out as a doublet with the integration three protons at 1.09-1.32 ppm. was obtained using electron impact ionization technique. The molecular ion peak observed at m/z 492.0680 Da was within the acceptable limit for molecular weight and the empirical formula of compound 22. The characteristic fragmentations for 4-thiazolidinone were also observed. The main fragmentation product was observed as 1-imino-2-methyl-2,3-dihydro-1H-indolium cation, giving the base peak at m/z 147.0935 Da 35,36 . It was also investigated whether the effect of compound 12 was mediated via the apoptotic pathway; and if so which apoptotic pathway was responsible for this process. This part of study was performed at the Department of Biophysics, School of Medicine, Marmara University. Adherent cell population was determined by cell count and number of apoptotic cells was detected by TUNEL assay and FACS analysis 16 . 
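For the CA inhibition assay described above, the inhibition constants were obtained by non-linear least-squares fitting in PRISM 3. As a rough illustration of the same idea in code (an assumption about the model form, not the authors' workflow), a SciPy sketch fitting a simple one-site dose-response curve to hypothetical initial-rate data:

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_response(conc, ic50, hill):
    """Fractional residual enzyme activity for a simple one-site inhibition model."""
    return 1.0 / (1.0 + (conc / ic50) ** hill)

# Hypothetical initial-rate data: inhibitor concentrations (uM) and the catalysed
# rate relative to an uninhibited control (uncatalysed rate already subtracted).
conc = np.array([0.01, 0.1, 0.3, 1.0, 3.0, 10.0, 50.0])
activity = np.array([0.99, 0.93, 0.78, 0.52, 0.28, 0.11, 0.03])

(ic50, hill), _ = curve_fit(dose_response, conc, activity, p0=[1.0, 1.0])
print(f"IC50 ~ {ic50:.2f} uM (Hill slope {hill:.2f})")
```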
Annexin-V, a well-established technique to determine apoptosis, binds and detects translocation of phosphatidyl serines to the outer membrane, an indication of the beginning of apoptosis 37 . Flow cytometric analysis of the cell lines was performed using FITC-labeled Annexin-V and propidium iodide, a DNA-binding dye used as an indicator of DNA damage. Our results were introduced in Figure 7, where Q4 and Q2 were regions of early and late stages of apoptosis, respectively, and Q1 involved necrotic cells. In this study, compound 12 was added to the cell cultures at indicated concentrations (0, 10, 50, 80 and 100 mM) and apoptosis was determined at 24 and 48 h following inoculation. A dosedependent increase in the number of apoptotic cells can be seen as the concentration rises, which reaches to its maximum at 80 mM for 24-h period. The cell numbers were substantially reduced at 100 mM at the end of 24-h period and at 80 and 100 mM at the end of 48-h period, as serious indicators of cytotoxicity ( Figures 5 and 6). Addition of 100 mM substance was completely destructive for the cells which prevented apoptotic analyses due to extremely low cell number and completely distorted morphology (Figure 7). In addition, compound 12 have further been investigated for CA inhibition (Table 1). There was no inhibition up to 50 mM inhibitor against hCA I, whereas hCA II, IX and XII was inhibited with IC 50 of in the range of 0.72-1.60 mM, being thus a low micromolar inhibitor with a potency similar to clinically used sulfonamide acetazolamide (AAZ) in Supporting information (Figures 8 and 9). Conclusion The objective of this study was to synthesize and investigate the anticancer activity of new sulfonylthiourea or 4-thiazolidinone derived from indapamide with the hope of discovering new structure leads serving as anticancer agents. Our aim has been verified by the synthesis of two different groups of structure hybrids comprising basically the indole moiety attached to either sulfonylthiourea or 4-thiazolidinone counterparts through various linkages for synergistic purpose. From the preliminary results of cell growth inhibition study, we could conclude that the selected by NIH-NCI compounds 16, 22, 26, 28 and 30 show no significant and apoptotic cells were detected by TUNEL assay and FACS analysis. The apoptotic effect of the compound 12 was started at 50 mM following 24-h incubation. At 48-h incubation of compound 12 displayed highly toxic effects on cells as demonstrated by dramatically reduced cells number and clear deformations in shape. The obtained results clearly revealed that compound 12 derived from the indapamide exhibited better growth inhibition and apoptotic effect than their 4-thiazolidinone and other compounds. In addition, this compound was investigated as inhibitors of four physiologically relevant human carbonic anhydrase isoforms, hCA I and II (cytosolic idoforms) as well as hCA IX and XII (transmembrane, tumor-associated isoforms). As seen from data of Table 1, hCA I was not inhibited, whereas the remaining isoforms were inhibited with IC 50 -s in the range of 0.72-1.60 mM. Finally, the broad spectrum anticancer activity displayed by compound 12 will be of interest for future derivatization in the hope of finding more active and selective anticancer agents.
2,914.8
2015-02-16T00:00:00.000
[ "Chemistry" ]
Multimodality Advanced Cardiovascular and Molecular Imaging for Early Detection and Monitoring of Cancer Therapy-Associated Cardiotoxicity and the Role of Artificial Intelligence and Big Data Cancer mortality has improved due to earlier detection via screening, as well as due to novel cancer therapies such as tyrosine kinase inhibitors and immune checkpoint inhibitions. However, similarly to older cancer therapies such as anthracyclines, these therapies have also been documented to cause cardiotoxic events including cardiomyopathy, myocardial infarction, myocarditis, arrhythmia, hypertension, and thrombosis. Imaging modalities such as echocardiography and magnetic resonance imaging (MRI) are critical in monitoring and evaluating for cardiotoxicity from these treatments, as well as in providing information for the assessment of function and wall motion abnormalities. MRI also allows for additional tissue characterization using T1, T2, extracellular volume (ECV), and delayed gadolinium enhancement (DGE) assessment. Furthermore, emerging technologies may be able to assist with these efforts. Nuclear imaging using targeted radiotracers, some of which are already clinically used, may have more specificity and help provide information on the mechanisms of cardiotoxicity, including in anthracycline mediated cardiomyopathy and checkpoint inhibitor myocarditis. Hyperpolarized MRI may be used to evaluate the effects of oncologic therapy on cardiac metabolism. Lastly, artificial intelligence and big data of imaging modalities may help predict and detect early signs of cardiotoxicity and response to cardioprotective medications as well as provide insights on the added value of molecular imaging and correlations with cardiovascular outcomes. In this review, the current imaging modalities used to assess for cardiotoxicity from cancer treatments are discussed, in addition to ongoing research on targeted molecular radiotracers, hyperpolarized MRI, as well as the role of artificial intelligence (AI) and big data in imaging that would help improve the detection and prognostication of cancer-treatment cardiotoxicity.
INTRODUCTION Cancer incidence is expected to increase by 50% by 2050, but over the past two decades, cancer mortality has improved in part due to earlier detection via screening and the advent of novel therapies such as tyrosine kinase inhibitors (TKI) for cancers like chronic myelogenous leukemia (CML), liver, gastrointestinal and lung cancers, as well as immunotherapy, such as checkpoint inhibitors, for metastatic disease and an expanding list of indications including triple negative breast cancer, lung cancer, melanoma, bladder cancer, and renal cell cancer (1)(2)(3)(4)(5)(6). However, with the rise of newer oncologic therapies, a spectrum of adverse cardiovascular toxicities including cardiomyopathy (CM), myocardial infarction, myocarditis, arrhythmia, hypertension (HTN) and thrombosis has been associated with these agents. More traditional cardiotoxic agents like anthracyclines (e.g., doxorubicin), one of the most widely used classes of chemotherapeutics due to improved overall cancer and survival outcomes, have been shown to alter myocardial energetics, promote mitochondrial dysfunction, increase reactive oxygen species levels leading to activation of matrix metalloproteases, inhibit topoisomerase IIb and cause DNA strand breaks, thereby promoting cardiomyopathy (7-9). HER2 inhibitors like trastuzumab have also been shown to increase the risk of CM by antagonizing important pro-survival pathways as well as other signal transduction pathways important for metabolism in the heart (10). Platinum agents like cisplatin have been shown to increase oxidative stress and apoptosis and have been associated with cardiomyopathy in rare instances (11). Alkylating agents like cyclophosphamide, which can cause oxidative damage and direct endothelial cell damage, have been linked to myocarditis and cardiomyopathy (12). Antimetabolites like 5-fluorouracil (5FU), which is commonly used in head and neck as well as gastrointestinal cancers, have been shown to increase the risk of coronary vasospasm and myocardial infarction (13,14). Multiple myeloma therapies (bortezomib, lenalidomide) and vascular endothelial growth factor (VEGF) inhibitors like bevacizumab have been associated with thrombosis and hypertension by promoting endothelial cell dysfunction (15)(16)(17)(18). TKIs like ibrutinib have been associated with atrial fibrillation. Abbreviations: AI, artificial intelligence; CM, cardiomyopathy; CML, chronic myelogenous leukemia; DGE, delayed gadolinium enhancement; DNA, deoxyribonucleic acid; ECV, extracellular volume; GLS, global longitudinal strain; ICI, checkpoint inhibitors; HER2, human epidermal growth factor receptor 2; HF, heart failure; HTN, hypertension; MI, myocardial infarction; MUGA, multigated acquisition; ROS, reactive oxygen species; TdP, torsades de pointes; TKI, tyrosine kinase inhibitor; VTE, venous thromboembolism. Of the close to 2 million patients diagnosed with cancer in 2019, it is estimated that 38.5% are eligible for ICI therapy (22,23).
In addition to increased risk of myocarditis, pericarditis and vasculitis, immune checkpoint inhibitors (ICI) have been associated with increased risk of plaque rupture/acceleration of atherosclerosis and thrombosis (24). ICI myocarditis is characterized by lymphocytic infiltration with CD4 and CD8 cells and mortality is high if not identified and if left untreated (25). Newer immunotherapies may also increase risk of myocarditis, such as cellular therapies like CART and molecular inhibitors such as CCR4 antagonist, mogamulizumab, which is used to treat T cell lymphomas (26)(27)(28). However, evaluation of the earliest signs of immune cell infiltration in the myocarditis process is limited (Table 1; Figure 1). Imaging modalities like echocardiography (echo) and magnetic resonance imaging (MRI) are routinely used to monitor and evaluate for the aforementioned oncologic therapy related cardiotoxicity, with both allowing for assessment of function and wall motion abnormalities and MRI allowing for additional tissue characterization using T1, T2, extracellular volume (ECV) and delayed gadolinium enhancement (DGE) assessment. While nuclear studies like multi-gated acquisition (MUGA) scans have fallen out of favor for the evaluation of cardiomyopathy mediated by oncologic therapy due to the higher sensitivity, and availability of echo and MRI, emerging nuclear imaging using molecularly targeted radiotracers may confer more specificity and help elucidate the mechanisms of cardiotoxicity, many of which are already in clinical use for oncology purposes and thus can be adapted to evaluate their signal/role in cardiotoxicity ( Table 1). In addition to molecular targets, hyperpolarized MRI has emerged as a potential imaging modality to evaluate effects of oncologic therapy on cardiac metabolism and has reached human studies. Finally, artificial intelligence and big data of imaging modalities including electrocardiograms may be able to help predict and detect early signs of cardiotoxicity and response to cardioprotective medications once cardiomyopathy develops but also help provide insights on diagnostic and prognostic value of molecular based imaging. We review current imaging modalities used to assess for cardiovascular toxicities associated with oncologic therapies and highlight ongoing research in the areas of molecular imaging, targeted molecular radiotracers and hyperpolarized MRI as well as the role of artificial intelligence (AI) and big data in imaging that would help improve detection, prognostication of oncologic therapy related cardiotoxicity. Cardiotoxicity due to anthracycline use (often dose dependent, but can occur at any dose) are common, up to 5% with cumulative doses <400 mg/kg, but up to 20% for those treated with 700 mg/kg or more (72). HER2 inhibitor mediated cardiomyopathy can occur in 5-10% of patients and is increased when given in conjunction with anthracyclines up to 27% (73,74). Oncologic therapy mediated cardiomyopathy can be evaluated by traditional imaging modalities such as echo and MRI, which are able to evaluate wall motion, left and right ventricular function and even early signs of toxicity via changes in strain, namely global longitudinal strain (75,76 (77)(78)(79)(80)(81). Due to reduced variability compared to 2D echo, 3D echo or MRI are recommended for sequential follow up (82). 
In addition to being the gold standard for volumetrics and ejection fraction, MRI has additional evaluation capabilities including tissue characterization for injured cells such as changes in ECV and increased native T1 times, shown with anthracycline use and increased T2 relaxation times with anthracycline toxicity (83)(84)(85)(86). The presence of DGE post trastuzumab, a HER2 inhibitor, was associated with cardiomyopathy (87). Strain as a Predictor of Cardiomyopathy Feature tracking global longitudinal strain (GLS) was first used in echo to show that it could be predictive of future cardiomyopathy in multiple studies of cancer patients undergoing cardiotoxic chemotherapy with anthracycline or trastuzumab. For example, an increase in GLS >12 or 15% was associated with a significant drop in LVEF >10% 6 months after in several studies (88,89). MRI has subsequently shown that use of tagging, feature tracking strain or fast strain encoded (SENC) assessment are sensitive and highly accurate in detecting subclinical cardiotoxicity as evidenced by an increase in GLS for patients on cardiotoxic chemotherapy such as anthracyclines, with SENC having a higher accuracy that was less dependent on loading conditions (90)(91)(92)(93)(94). However, strain assessment in MRI is largely used in a research setting and is not routinely used in the clinical practice yet. MRI Evaluation of Adverse Immune Related Cardiac Events ICI myocarditis can occur in 1-2% of patients and has a high mortality of up to 50% if untreated (25,95). MRI has become a work horse for evaluation of immunotherapy related cardiotoxicities. In addition to T1, and ECV changes, T2 abnormalities allow for assessment of myocardial edema in patients on checkpoint inhibitors with concern for myocarditis or pericarditis and DGE, a marker of myocardial injury FIGURE 1 | Imaging modalities and evaluation of cardiotoxicities of oncologic therapies. For evaluation of peripheral artery disease (PAD) (top left), FDG, FAP and SSTR2 imaging may be able to identify vulnerable plaque, while CT and MRI can help evaluate degree of stenosis. For evaluation of thrombosis (top right), nuclear imaging may be able to identify early clot formation with radiotracers directed at fibrin or glycoprotine IIb/IIIa, and MRI can use a long inversion time to identify thrombus, as with TI600. For evaluation of cardiomyopathy/myocarditis (middle), echo and MRI can evaluate ejection fraction as well as myocardial strain. For myocarditis, MRI can evaluate tissue characteristics such as T1, T2 and DGE, which are now components of the Lake Louise criteria for myocarditis. Nuclear can evaluate for T cell infiltration using tracers targeting CD4, CD8 cells. Tracers directed against FAP, such as 68 Ga-FAPI has been shown to be increased in an animal model of checkpoint inhibitor myocarditis. Evaluation of pericarditis (bottom left), a complication of checkpoint inhibitors can be assessed by echo for detection of pericardial effusion, but with greater specificity MRI can identify edema and DGE. Atherosclerosis (bottom right) can be evaluated by traditional SPECT and PET techniques to evaluate for perfusion with stress and rest. CT coronary is now first line for evaluation of those with intermediate risk chest pain to rule out obstructive disease. Stress MRI or DGE can also be performed to evaluate for prior myocardial infarction as well as myocardial viability. or scarring is another tissue characterization parameter that can evaluate for immunotherapy toxicities. 
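To make the quoted strain criterion explicit, a small illustrative sketch (not clinical guidance; the 12-15% thresholds are the values cited from the echo studies above) of flagging a relative worsening of GLS between a baseline and a follow-up study:

```python
def gls_relative_change(baseline_gls: float, followup_gls: float) -> float:
    """Relative change in the magnitude of global longitudinal strain (GLS).
    GLS is conventionally negative; a fall in magnitude means worse function."""
    return 100.0 * (abs(baseline_gls) - abs(followup_gls)) / abs(baseline_gls)

def flags_subclinical_toxicity(baseline_gls: float, followup_gls: float,
                               threshold_pct: float = 15.0) -> bool:
    """True if the relative worsening of GLS exceeds the quoted 12-15% range."""
    return gls_relative_change(baseline_gls, followup_gls) > threshold_pct

# Hypothetical patient: baseline GLS -20%, follow-up -16% -> 20% relative worsening.
print(gls_relative_change(-20.0, -16.0), flags_subclinical_toxicity(-20.0, -16.0))
```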
MRI is recommended by specialty society guidelines as part of the evaluation and monitoring of ICI myocarditis using the Lake Louise criteria, updated in 2018 to require both increased myocardial signal intensity ratio >2 or increased myocardial relaxation times or visible myocardial edema in T2-weighted images and increased myocardial relaxation times or extracellular volume fraction or DGE in T1-weighted images for the imaging diagnosis of myocarditis (80,(96)(97)(98)(99)(100). However, DGE is non-specific and cannot distinguish from cell damage vs. end stage fibrosis and current standard clinical imaging modalities are lacking in assessment of potential molecular correlates, such as collagen deposition and scar. Thus, molecularly targeted imaging tracers may shed light on both mechanism and help increase the specificity of cardiac imaging findings. Molecular Nuclear Imaging for Evaluation of Anthracycline Cardiotoxicity Anthracycline mediated cardiotoxicity has been associated with an increase in reactive oxygen species (ROS) levels in the heart. ROS levels have been shown to confer cardiotoxicity by increased apoptosis, inflammation, mitochondrial dysfunction and activation of matrix metalloproteases (31). Molecular nuclear imaging studies have helped shed light on mechanisms of anthracycline mediated cardiotoxicity. Increased ROS levels in an animal model of doxorubicin cardiotoxicity showed that a novel PET tracer, 18 F-labeled radioanalog of dihydroethidium, [ 18 F]-6-(4-((1-(2-fluoroethyl)-1H-1,2,3-triazol-4-yl)methoxy)phenyl)-5methyl-5,6 dihydrophenanthridine-3, 8-diamine ([ 18 F]·DHMT), which targets superoxide, was able to reveal an elevation in superoxide levels in the heart at least 2 weeks prior to a drop in the left ventricular ejection fraction (35). ROS activation of MMPs downstream can then promote adverse cardiac remodeling (101). Renin-angiotensin-aldosterone system (RAAS) activation has been shown to augment the progression of anthracycline induced cardiotoxicity and inhibition via RAAS inhibitors like angiotensin receptor blockers or angiotensin converting enzyme inhibitors have been able to prevent and treat anthracycline mediated cardiomyopathy (102,103). Use of a novel angiotensin receptor-neprilysin Inhibitor, sacubitril/valsartan in a rodent model of anthracycline cardiotoxicity was able to attenuate cardiotoxicity. MMP imaging of activated MMPs using SPECT radiotracer 99m Tc-RP805 showed that sacubitril/valsartan in conjunction with doxorubicin was able to significantly attenuate MMP activation as well as prevent a decline in LVEF compared to doxorubicin alone vs. doxorubicin and valsartan groups. Myocardial MMP activity as assessed by 99m Tc-RP805 uptake was inversely related to left ventricular ejection fraction (31). In addition to MMP activation and adverse remodeling, ROS can also injure endothelial cells. Anthracycline use has been associated with capillary loss in the heart in some rodent models and protection of endothelial cells with vascular endothelial growth factor-B (VEGF-B) treatment led to preservation of capillary mass (104). ROS has also been shown to confer mitochondrial dysfunction. Disruption of mitochondrial membrane potential in mitochondrial dysfunction mediated by anthracycline can be evaluated by 68 Ga-Galmydar. In a rodent model, uptake of 68 Ga-Galmydar was reduced by 2-fold with anthracycline treatment compared to control and in H9c2 rat cardiomyoblasts, this was associated with activation of the apoptosis cascade (36). 
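The two-part logic of the updated Lake Louise criteria described above can be summarized schematically. The sketch below is purely illustrative (the field names are invented, and the actual imaging thresholds are sequence- and site-specific and are not encoded here):

```python
from dataclasses import dataclass

@dataclass
class CmrFindings:
    # T2-based markers of myocardial oedema
    t2_signal_ratio_over_2: bool = False   # regional signal intensity ratio > 2
    t2_mapping_elevated: bool = False      # prolonged T2 relaxation time
    visible_oedema_t2w: bool = False       # visible oedema on T2-weighted images
    # T1-based markers of myocardial injury
    native_t1_elevated: bool = False       # increased myocardial relaxation time
    ecv_elevated: bool = False             # increased extracellular volume fraction
    dge_present: bool = False              # delayed gadolinium enhancement

def meets_lake_louise_2018(f: CmrFindings) -> bool:
    """Schematic 2018 Lake Louise logic: at least one T2-based criterion AND
    at least one T1-based criterion must be positive."""
    t2_positive = f.t2_signal_ratio_over_2 or f.t2_mapping_elevated or f.visible_oedema_t2w
    t1_positive = f.native_t1_elevated or f.ecv_elevated or f.dge_present
    return t2_positive and t1_positive

# Hypothetical case: oedema on T2 mapping plus DGE -> imaging criteria met.
print(meets_lake_louise_2018(CmrFindings(t2_mapping_elevated=True, dge_present=True)))
```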
Early markers of anthracycline cardiotoxicity include an increased uptake of indium-111-labeled antimyosin in the heart, which occurs due to myocyte damage and subsequent association of antimyosin with myosin, which is normally intracellular. Increased uptake of 111 In-antimyosin in patients on anthracycline was associated with LV dysfunction (30). Detection of the earliest stages of apoptosis can also signal early toxicity. Annexin V has a high affinity for phosphatidylserine, which gets exposed on the cell surface during apoptosis. Use of annexin V imaging has allowed for detection of cells undergoing apoptosis. In a rodent model of doxorubicin cardiotoxicity, radiolabeled annexin V, 99m Tc-annexin was used to visualize apoptosis that corresponded to histological evidence of apoptosis on TUNEL staining (33). Finally, sympathetic nervous innervation of the myocardium has also been shown to be disrupted with anthracycline toxicity. An assessment of myocardial sympathetic innervation impairment was done by evaluating a radiotracer that is an analog of norepinephrine, iodine-123-labeled metaiodobenzylguanidine ( 123 I-MIBG). A decrease in 123 I-MIBG uptake with increasing cumulative doses of anthracyclines in human patients was associated with LV dysfunction. However, it takes higher cumulative doses of anthracycline to see a drop in 123 I-MIBG uptake, thus this agent would be less useful if earlier detection of toxicity is desired. However, 123 I-MIBG is clinically available and routinely used to evaluate for adrenaline secreting tumors (30) (Figure 2). CD4, CD8 Imaging in ICI Myocarditis Molecularly targeted radiotracers in nuclear medicine are emerging to evaluate processes such as fibrosis, inflammation and thrombosis, extending beyond nuclear cardiology's traditional use to evaluate perfusion deficits in ischemic heart disease via single photon emission computed tomography (SPECT) and positron emission tomography (PET), tissue viability or inflammation with PET fluorodeoxyglucose (FDG), which evaluates for glucose uptake predominantly by inflammatory cells, such as myeloid and T cells (106). These processes are common adverse effects of oncologic and immunotherapies. Detection of the earliest signs of myocardial inflammation in ICI myocarditis, which occurs in 1-2% of patients on these agents remains a clinical challenge (95,107). The ability to detect the initial infiltration of inflammatory cells such as CD4 or CD8 cells before injury has occurred could help reduce morbidity and high mortality associated with this condition (25). Emerging molecularly targeted probes against CD4, 89 Zr-DFO-CD4 and CD8 cells, 89 Zr-DFO-CD8a may be a potential avenue to detect inflammation at these earliest of stages, which can prompt more frequent follow ups, biomarker checking and earlier therapy (44). Determining specificity of these findings will also be important as to avoid withholding cancer fighting immunotherapy or treatment with steroids, which may potentially lower the efficacy of the immunotherapy agent (108-110). Checkpoint inhibitors have been shown to accelerate atherosclerosis and increase risk of plaque rupture in addition to the risk for myocarditis and pericarditis by driving increased inflammatory cells, including CD8 T cell infiltration into plaques in animal models and patients on checkpoint inhibitors (43,111,112). 
Thus, evaluation of atherosclerotic lesions with CD8 radiotracers, may be able to identify those at risk for myocardial infarction in patients on checkpoint inhibitor therapy. Detection of Vulnerable Plaque Both checkpoint inhibitor use and certain TKIs like ponatinib and sorafenib have been associated with increased risk of myocardial infarction (43,113). ICIs have also been associated with increased risk of stroke (114). Use of ICIs have been associated with increased infiltration of CD3, CD8 and CD68 cells, markers for T cells and macrophages respectively into atherosclerotic lesions (115). Increased somatostatin receptor 2 (SSTR2) on the cell surface of inflammatory macrophages is a marker of macrophage activation. In a study of symptomatic stroke patients, increased uptake of SSTR2 in culprit vessels assessed by PET tracer 68 Ga-DOTATATE was shown to predict plaque rupture (58). Thus, evaluation of SSRT2 levels in patients on ICI therapy may help identify vulnerable plaques and warrants further investigation. The mechanisms for TKI mediated MI on the other hand are attributed to endothelial cell dysfunction and activation of apoptosis pathways, although direct evidence for MI mechanisms are still lacking, thus further research would be FIGURE 2 | Molecular nuclear imaging elucidates anthracycline cardiotoxicity mechanisms. Anthracyclines can increase ROS levels (which can be assessed by nuclear tracer 18 F-DHMT), which can activate MMPs (which can be assessed by 99m Tc-RP805) (bottom left), leading to adverse cardiac remodeling. ROS levels can also promote mitochondrial dysfunction, which can disrupt the mitochondrial membrane potential and thereby reduce 68 Ga-Galmydar uptake (middle bottom). Mitochondrial damage can lead to apoptosis, which can be detected by Annexin V positivity (detected by 99m Tc-Annexin (bottom right). Damage to cardiomyocytes can lead to release of intracellular myosin, which can thereby be assessed by (105). In-myosin (right of ROS). In addition to ROS increase, anthracyclines can also directly bind and inhibit Topoisomerase II, which can lead to double-stranded DNA breaks (right) and cause further mitochondrial dysfunction and prevent mitochondrial regeneration. Finally, anthracyclines can lead to impaired sympathetic innervation over time for mechanisms that are unclear but is associated with cardiac dysfunction and this can be assessed by 123 needed to see if macrophage activation is involved and whether activated macrophage imaging would help risk stratify patients on these TKIs (113). FAP Imaging in ICI Myocarditis Another potential marker of early stages of ICI myocarditis is fibroblast activating protein (FAP), which is a protein that gets significantly upregulated in cancer tissue, atherosclerosis, arthritis and fibrosis. It is emerging as an imaging marker for fibroblast activation and fibrosis (116,117). A PET radiotracer tracer targeting FAP is 68 Ga-FAPI. In a recent study, 68 Ga-FAPI was shown to be a potential early marker of ICI myocarditis with median standardized uptake values (SUV) 1.79 (IQR 1.62, 1.85) in myocarditis patients vs. 1.15 (IQR 0.955, 1.52) in nonmyocarditis patients (45). FAP has also been used to evaluate post myocardial infarction fibrosis, but its level in the blood vessels and myocardium of patients on checkpoint inhibitors is unclear (118,119). PD1 Imaging as a Potential Risk Factor for ICI Myocarditis Another challenge with checkpoint inhibitor myocarditis is trying to figure out who is at increased risk. 
Programmed cell death protein 1 (PD1), a target of checkpoint inhibitors like pembrolizumab and its expression on cardiomyocytes warrants additional research as a potential risk factor. PET radiotracer, 64 Cu-DOTA-pembrolizumab can detect PD1 in rodent hearts as well as on the surface of human blood cells and may be used in such an investigation (120). MRI DGE Limitations in Fibrosis Assessment and Collagen Imaging A higher burden of DGE and presumed scarring in hypertrophic cardiomyopathy is associated with worse cardiovascular and death outcomes (121,122). In a retrospective study of ICI myocarditis patients who underwent cMRI, DGE evaluation did not correlate with cardiovascular outcomes, nor fibrosis, with only 35% of pathology proven fibrosis cases showing DGE on MRI (96,121,123,124). Further, of the 56 patients with histopathology available either through biopsy or autopsy, 98% had lymphocytic infiltration but only 38% had DGE and 26% with T2 positivity (96). Thus in addition to evaluation of lymphocytic infiltration with targeted radiotracers for CD4 and CD8 cells to identify early stages of myocarditis and increase sensitivity of diagnosis, late stages of myocardial injury that can result in scar and thus collagen deposition can be evaluated by radiotracers targeting collagen. The PET radiotracer 68 Gacollagelin targets collagen, which can help quantify the burden of scarring or end stage fibrosis, which was shown to be able to detect pulmonary fibrosis in a mouse model of bleomycin induced pulmonary fibrosis and correlated with fibrosis on pathology (46) (Figure 3). MRI with DGE is able to evaluate for possible scarring, but it is not able to distinguish between early vs. late stage fibrosis, with the former having potential reversibility and may partially explain the differential outcomes we see between HCM and ICI myocarditis patients when it comes to the differences in the fibrosis processes between the two conditions and correlation of scar burden as quantified by DGE and outcomes (125). There is also a MRI collagen type I targeted probe EP-3533 that is conjugated to gadolinium, which was shown to be able to visualize pulmonary, liver and bowel fibrosis in rodent models, but these have not yet advanced to use in humans (126)(127)(128). Thrombosis Imaging Pathologic thromboses like pulmonary embolism (PE), deep vein thrombosis (DVT) carries high morbidity and mortality (129). Cancer patients are at increased risk of thrombosis and some of their oncologic therapies can increase that risk further (130,131). ICI, VEGF inhibitors and lenalidomide have been associated with increased thrombosis risk. Increasing the sensitivity of diagnosing blood clots so treatment can be timely instigated may help avoid complications and help improve outcomes (132)(133)(134). Radiotracers that can target fibrin, a molecular precursor of blood clotting can be useful in detection of blood clots. PET radiotracer 64 CU-FBP8 can target fibrin and has been used to identify thrombi in animal models, particularly earlier stages of clots (49). Another PET radiotracer, 18 F-GP1 that targets the glycoprotein IIb/IIIa receptors on activated platelets and has been demonstrated to detect venous thrombosis and arterial thromboses (53,135). A phase 1, first-in-human study of 18 F-GP1 positron emission tomography for imaging acute arterial thrombosis is underway (53). 
These PET thrombosis imaging agents may be of utility for detection of DVTs and PEs in cancer patients, especially those with contraindications to contrast, such as chronic kidney disease or contrast allergy. MOLECULAR MRI AND MR SPECTROSCOPY Hyperpolarized MRI for Evaluation of Cardiac Metabolism in vivo As the human heart fails, it has been shown to shift its metabolism from predominantly fatty acid oxidation to greater glucose utilization (136). Changes in oxidative phosphorylation or substrate utilization may reflect early signs of cardiotoxicity, yet in vivo real-time detection of cardiac metabolism has been limited to small studies with radioactive tracers using PET. More recently, substrate utilization and metabolism have been evaluated using magnetic resonance (MR) imaging and spectroscopy. Hyperpolarized carbon-13 ( 13 C) labeled pyruvate imaging differs from standard clinical MRI using gadolinium contrast in that it provides information on how tissue uses carbon-based nutrients (37). In rodent models of anthracycline cardiotoxicity, carbon-13 MR spectroscopy (MRS) was used to assess changes to oxidative phosphorylation and tricarboxylic acid (TCA) cycle flux in vivo. These studies showed that doxorubicin led to reduced cardiac oxidative phosphorylation in a rat model, as evidenced by increased 13 C lactate production (38). The first in-human hyperpolarized MRS studies evaluated tumor metabolism in prostate cancer, and ongoing clinical trials are evaluating hyperpolarized MR in tumor metabolism and its correlation with outcomes in prostate and pancreatic cancer (137)(138)(139). The first use of hyperpolarized 13 C metabolic MRI in the human heart involved evaluation of pyruvate metabolism in healthy individuals (39). Hyperpolarized MR imaging may allow visualization of changes in cardiac energetics, particularly the shift from fatty acid metabolism to greater glucose utilization in an evolving cardiomyopathy in response to cardiotoxic chemotherapy, and may allow evaluation of the response to cardioprotective medications such as beta blockers and angiotensin converting enzyme inhibitors in real time (140). Apoptosis Evaluation by MRI Various chemotherapy agents, most notably anthracyclines, are known to increase cardiomyocyte apoptosis. Molecular MRI probes conjugated to superparamagnetic iron oxide (SPIO) and human annexin were shown to visualize apoptosis in real time in rodent models following ischemia and after doxorubicin exposure; these MRI molecular probes have not gone beyond animal studies thus far but have the potential to detect early signs of cell death in the myocardium (105,141). Inflammation Imaging by MRI In addition to T1, ECV and T2 signal changes, use of ultrasmall superparamagnetic particles of iron oxide (USPIOs) in MRI may confer insights on inflammation via increased macrophage activity. USPIOs have been shown to be taken up by macrophages, and their uptake correlates with plaque inflammation in animal studies (142). In a study of patients with severe carotid stenosis, uptake of USPIOs corresponded to inflamed plaques on histology. USPIO uptake induced areas of signal loss on T2*-weighted magnetic resonance imaging within the vessel wall. This approach has been used clinically and may have the potential to distinguish vulnerable from less vulnerable plaque; however, whether it can help predict plaque vulnerability in patients on checkpoint inhibitors or help identify ICI myocarditis is untested and warrants further investigation (143).
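To make the kind of metabolic readout described above concrete, the sketch below computes a simple, model-free lactate-to-pyruvate ratio from hypothetical dynamic 13 C signal curves of the sort acquired after a hyperpolarized pyruvate injection; the curve shapes, sampling interval and amplitudes are illustrative assumptions, not measured data or the analysis used in the cited studies.

```python
import numpy as np

# Hypothetical dynamic 13C signal curves (arbitrary units) sampled every 3 s
# after a hyperpolarized [1-13C]pyruvate injection.
t = np.arange(0, 60, 3.0)                                  # seconds
pyruvate = 100 * np.exp(-t / 25.0)                         # decaying substrate signal
lactate = 30 * (np.exp(-t / 40.0) - np.exp(-t / 8.0))      # metabolite appearance and decay

# Ratio of summed signals (equal sampling, so this equals the area-under-curve ratio):
# a crude index of pyruvate-to-lactate conversion; higher values suggest a shift
# toward glycolytic metabolism.
ratio = lactate.sum() / pyruvate.sum()
print(f"lactate/pyruvate signal ratio: {ratio:.3f}")
```

In practice such ratios (or fitted conversion rate constants) would be compared before and after a cardiotoxic exposure rather than interpreted in isolation.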
Barriers to Advancing Molecular Imaging For the molecular imaging tracers that are already in clinical use, barriers include radiation exposure, so who should get the test, when, and how often will have to be established. For example, if FAP is confirmed as a potential early marker of ICI myocarditis, then perhaps it should be obtained when there is suspicion for myocarditis or when troponin becomes positive. Timed with evaluation of this marker for residual disease, it could also help monitor resolution of myocarditis, potentially complementing cardiac MRI, which is usually used for monitoring, or taking the place of MRI for those who cannot tolerate it.

FIGURE 3 | Imaging modalities in the evaluation of immunotherapy-related cardiotoxicities. Imaging modalities that can be used to monitor myocardial inflammation due to immunotherapy include: MRI (top), using tissue characterization assessments such as T2, T1/ECV, delayed gadolinium enhancement (DGE) and cine imaging to evaluate wall motion and function; nuclear imaging (middle), involving molecularly targeted probes conjugated to radiotracers that facilitate evaluation of CD4 cells with 89 Zr-DFO-CD4, CD8 cells with 89 Zr-DFO-CD8, early signs of fibrosis with fibroblast activation protein (FAP), expression of PD1 on cardiomyocytes with 64 Cu-DOTA-pembrolizumab (which may reflect increased risk of checkpoint inhibitor myocarditis), FDG for monitoring of inflammation, and the final stages of inflammation with tissue damage, fibrosis and scar deposition assessed by collagen imaging with 68 Ga-collagelin; echocardiography (bottom) can evaluate regional and global strain to detect signs of chemotherapy-related toxicity and myocarditis.

Access is another challenge. Molecular nuclear studies are often available only through large hospital systems, and for agents with shorter radioisotope half-lives, such as Gallium-68 ( 68 Ga) with a half-life of 68 min, an onsite germanium-68/gallium-68 generator is needed along with the accompanying nuclear accreditation; more rural hospitals or private practices may therefore have to refer patients to larger, high-volume imaging centers for these tests (144). Finally, nuclear studies tend to be more expensive than echocardiography and on par with or more expensive than MRI studies because of the costs associated with radiolabeled probes, so obtaining approval for these studies can be a challenge for providers even when they are clinically indicated. For the molecular tracers that are at the preclinical stage, the usual barriers to clinical translation exist, including establishing safety, a favorable target-to-noise ratio in humans and correlation with outcomes to achieve FDA approval and ultimately clinical use. For radiotracers that are already in clinical use for oncology indications, such as those targeting FAP, CD4, CD8 and PD1, incidental detection in the heart and correlation with outcomes is possible and can be further explored for future dedicated cardiac imaging, potentially providing unique clinical value. The power of machine learning, artificial intelligence and big data in the evaluation of imaging signals can help unlock patterns that humans may not readily see, such as in a recent evaluation of cardiac fibrosis by T1 imaging on MRI, and can help correlate these imaging findings with outcomes (145).
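To illustrate the logistical constraint imposed by the 68-minute half-life of 68 Ga noted above, the short sketch below applies the standard radioactive decay law to estimate how much activity survives a transport delay; the 90-minute delay is an assumed example, not a reported figure.

```python
import math

def remaining_fraction(minutes_elapsed: float, half_life_min: float = 68.0) -> float:
    """Fraction of the initial activity left after a delay: A/A0 = 0.5 ** (t / T_half)."""
    return 0.5 ** (minutes_elapsed / half_life_min)

# Assumed example: a 90-minute courier run from an off-site generator to the scanner.
delay = 90.0
print(f"Activity remaining after {delay:.0f} min: {remaining_fraction(delay):.1%}")
# Roughly 40% of the dispatched dose remains, which is why an onsite 68Ge/68Ga generator is preferred.
```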
Overview of Current AI Applications in Cardio-Oncology Artificial intelligence (AI), through the training of machine and deep learning models, has shown remarkable potential in the prevention and diagnosis of cancer therapeutics-related cardiac dysfunction (CTRCD). With applications across all stages of the natural history of CTRCD, AI can assist scientists and physicians in screening for molecular interactions between novel therapeutic agents and the cardiovascular system, as well as in detecting subclinical cardiovascular effects prior to the development of overt clinical disease (Figure 4).

FIGURE 4 | Applications of artificial intelligence and big data in cardio-oncology. Artificial intelligence (AI) can improve our understanding of the early molecular and phenotypic changes that occur prior to the development of clinical cancer therapeutics-related cardiac dysfunction. Machine learning approaches enable high-throughput screening of novel therapeutics using preclinical models, such as induced pluripotent stem cells, as well as in silico simulations using libraries of drugs and molecular targets. In the clinical setting, AI can improve risk prediction of left ventricular dysfunction and arrhythmias, and facilitate accurate and standardized assessment of chamber size, function and coronary calcification, all hallmarks of cardiovascular disease that can be caused or exacerbated by cancer therapeutics. Therefore, AI offers an opportunity for early diagnosis and deployment of strategies to prevent progression to overt cardiovascular disease. Images have been reproduced under a Creative Commons Attribution 3.0 Unported License from smart.servier.com. CAD, coronary artery disease; CT, computed tomography; ECG, electrocardiography; hiPSC, human induced pluripotent stem cell; LV, left ventricular; MRI, magnetic resonance imaging; SPECT, single photon emission computed tomography.

At the preclinical stage, AI techniques have been used for high-throughput screening of cancer agents using a variety of disease models. These range from human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) exposed to antineoplastic agents and screening of drug libraries to detect agents that interact with channel proteins resulting in QT prolongation, all the way to exome sequencing to identify variants in cardiac injury pathway genes that protect against anthracycline-induced cardiotoxicity, and dual transcriptomic and molecular machine learning to predict different types of cardiotoxic response (146)(147)(148)(149)(150). Such approaches can de-risk early-stage drug discovery and also contribute to post-marketing surveillance to maximize patient safety. On the same note, pharmacovigilance in cardio-oncology can be assisted by machine learning-guided monitoring of electronic health records, including patient demographics, echocardiography and laboratory values, to detect signals suggestive of increased cardiac risk with specific therapies or practices (151,152). For therapies that form the mainstay of cancer treatment, ranging from chemotherapy to immunotherapy and radiation therapy, active surveillance protocols have been proposed and implemented, particularly for therapies with known cardiotoxic effects, such as anthracyclines and HER-2/neu inhibitors.
Here, non-invasive cardiac imaging (by means of transthoracic echocardiography and/or magnetic resonance imaging (MRI)) and electrocardiography (ECG) represent the modalities of choice in screening for conditions such as anthracycline-induced cardiotoxicity and immune checkpoint inhibitor-induced myocarditis (78,153). Whereas AI applications in cardiovascular imaging have traditionally been developed in the general population, shared phenotypes seen in both CTRCD and non-cancer-related cardiac dysfunction may extend the use of these technologies to cardio-oncology. An expanding body of research has in fact demonstrated the ability of deep learning-enhanced interpretation of the ECG to screen for and improve the diagnosis of left ventricular dysfunction, essentially functioning as a gatekeeper to the use of more advanced imaging modalities (154). Notably, this tool was tested in a randomized controlled trial and demonstrated effectiveness in increasing the early diagnosis of reduced left ventricular ejection fraction (LVEF) without an increase in the use of echocardiography (155). Similarly, AI-guided ECG assessment can also predict the future incidence of atrial fibrillation (156). In childhood cancer survivors, machine learning models trained on baseline and follow-up ECGs were able to predict future cardiomyopathy (157). However, whether these results generalize to cardio-oncology, such as in the monitoring of anthracycline- or Herceptin-mediated cardiotoxicity or ibrutinib-associated atrial fibrillation, remains unknown and should be explored in future studies (158,159). AI has contributed to a more efficient and standardized interpretation of several non-invasive cardiovascular imaging modalities. For instance, in the field of transthoracic echocardiography, deep learning video-based models now enable fast and automated calculation of LVEF, with variance comparable to or even lower than that of a human observer (160,161). Similarly, combined assessment of ECG- and echocardiography-derived AI models has shown good discrimination in detecting cardiac amyloidosis, a rare disorder that is nevertheless more prevalent among patients with cancer than in the general population (162). Similar approaches can be found in the field of computed tomography (CT) imaging, where automated tools enable accurate assessment of coronary artery calcium burden and can be generalized to both gated and non-gated CT scans of the chest, with the latter often used in the staging or monitoring of patients (163,164). Therefore, such tools may refine a patient's baseline cardiovascular risk and inform risk-benefit discussions about the deployment of potentially cardiotoxic therapies. Finally, automated quantification of chamber size, tissue characterization parameters such as T1, T2 and extracellular volume, and functional indices extracted from cardiac MRI images may confer insights into cardiotoxicity, including the potential to identify early to late cardiotoxicity mediated by chemotherapy or immunotherapy agents via detection of changes in chamber size, abnormal T1 and T2 relaxation times and delayed gadolinium enhancement patterns (86,95,96,99,145,(165)(166)(167). Deep learning models have also shown promise in the standardized interpretation of functional nuclear modalities, such as SPECT (single photon emission computed tomography) myocardial perfusion imaging, with good discrimination for the presence of obstructive coronary artery disease (168).
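As a concrete reference point for the coronary calcium quantification that such automated CT tools reproduce, below is a minimal sketch of a conventional Agatston-style score for a single axial slice; the 130 HU threshold and the density weighting follow the standard Agatston convention, while the input array, lesion mask and pixel spacing are hypothetical.

```python
import numpy as np

def agatston_slice_score(hu_slice: np.ndarray, lesion_mask: np.ndarray,
                         pixel_area_mm2: float) -> float:
    """Agatston-style score for one calcified lesion on one axial CT slice.

    hu_slice:     2-D array of Hounsfield units.
    lesion_mask:  boolean mask of one candidate lesion (>= 1 mm^2 in practice).
    """
    peak = hu_slice[lesion_mask].max()
    if peak < 130:                        # below the calcium threshold
        return 0.0
    # Density weighting per Agatston: 130-199 -> 1, 200-299 -> 2, 300-399 -> 3, >= 400 -> 4
    weight = 1 + min(int(peak // 100) - 1, 3)
    area_mm2 = lesion_mask.sum() * pixel_area_mm2
    return area_mm2 * weight

# Hypothetical 4x4 patch containing a small calcified focus
hu = np.array([[40,  50,  60, 45],
               [55, 320, 350, 70],
               [60, 310, 280, 65],
               [50,  60,  55, 40]], dtype=float)
mask = hu >= 130
print(agatston_slice_score(hu, mask, pixel_area_mm2=0.25))  # 4 px * 0.25 mm^2 * weight 3 = 3.0
```

A full scan score sums such lesion scores over all slices; deep learning tools automate the lesion detection step that is done here with a simple threshold.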
However, as these tools become clinically available, prospective validation and possibly recalibration specifically in patients with cancer will be required to ensure their validity and generalizability. Strengths and Weaknesses of Current Methods and Barriers for Clinical Translation To better understand the strengths and weaknesses of AI applications in cardio-oncology, one first needs to review key definitions. AI refers to the ability of an automated system to perform tasks that are typically characteristic of human intelligence, such as image and pattern recognition, as well as prediction and classification. Machine learning describes the process by which a system gains the ability to perform such tasks. This learning process can be further divided into supervised and unsupervised learning. The former describes the analysis of labeled datasets with the goal of predicting the label of a given datapoint based on a set of independent predictors. The latter refers to the analysis of unlabeled and unclassified datasets, where the algorithm attempts to discover patterns within the data on its own. Algorithms may range from traditional regression models to deep neural networks, consisting of multiple layers of neurons and nodes which operate in a manner loosely analogous to the human brain (169,170). However, independent of the algorithm used, machine learning systems rely on high-quality input to deliver high-quality output. This is where "big data" becomes relevant, describing the need for datasets that are large enough to ensure adequate variance, remain representative of their original and target populations, enable time-efficient analyses and have been carefully rather than opportunistically curated to address a specific question (171). With those key concepts in mind, some of the limitations of machine learning applications in cardio-oncology become apparent. First, cardiovascular disease is often listed as an exclusion criterion in major cancer trials, thus resulting in underrepresentation of patients with cardiovascular disease in pivotal cancer trials (172). However, the inclusion of cardiovascular outcomes in cancer trials will help fill this data gap if sufficient baseline and follow-up data are acquired (molecular biomarkers, baseline imaging prior to oncologic therapy and follow-up data that can be used as input). Second, while AI systems can learn patterns in the data, explaining what drives those predictions or establishing causal inference is not a straightforward task (173). Moreover, cancer is a highly heterogeneous condition with multiple molecular, histological, and clinical subtypes that often respond differently to the same therapies (174). Therefore, ensuring generalizability of models across different cancer subtypes, treatments and patient populations may be an insurmountable task without access to vast amounts of accurately labeled data. Third, there is often a significant delay between data collection, model training and final model deployment. As a result, AI models are often outdated when deployed for clinical use, highlighting the need for more efficient pathways that would enable real-time updates. Finally, AI models are bias-prone, often reproducing biases that are inherently present in the datasets used for training. Ensuring representation of diverse patient populations is of paramount importance to promote an equitable impact of AI in healthcare delivery and outcomes (175).
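To ground the supervised/unsupervised distinction drawn above, the sketch below fits a supervised classifier to labeled data and an unsupervised clustering model to the same features without labels; the data are synthetic and the scikit-learn models are illustrative choices, not the methods used in the cited studies.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic "patients": two features (e.g., a biomarker and an imaging index)
# drawn from two groups, with labels 0/1 (e.g., no CTRCD vs. CTRCD).
X, y = make_blobs(n_samples=200, centers=2, n_features=2, random_state=0)

# Supervised learning: use the known labels to learn a predictive rule.
clf = LogisticRegression().fit(X, y)
print("supervised training accuracy:", clf.score(X, y))

# Unsupervised learning: ignore the labels and let the algorithm find structure on its own.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```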
Future Applications of AI in Cardio-Oncology and Molecular Imaging With careful consideration of these limitations, AI has the potential to advance cardio-oncology in many different directions. Radiomic applications, which extract numerous metrics based on the shape, dimensions, signal density and spatial interrelationship of voxel signals in a given tissue, have been found to be superior to conventional readouts in reflecting tissue composition as well as metabolic or inflammatory activity (176)(177)(178). In fact, some of the most exciting applications of AI lie beyond structural imaging, in molecular imaging. In the recent past, deep learning and generative adversarial networks have successfully reconstructed PET images directly from raw sinogram data, effectively maximizing image quality (179,180). In other applications, AI tools have generated full-dose PET images from low-dose images, thus maximizing the signal-to-noise ratio at lower radiation levels (181,182). In another example, convolutional neural networks have enabled the development of cMRI virtual native imaging technologies, which generate late gadolinium enhancement-like images in an accurate and reproducible manner without the need for contrast administration (183). Though originally developed in patients with hypertrophic cardiomyopathy, this technology may be of value in cardio-oncology and in the monitoring of ICI myocarditis. Further, molecular imaging agents targeting biomarkers like FAP and PD1 are already used clinically in oncology to monitor for residual disease and to assess response to immunotherapy, respectively; thus, if the heart is captured in existing data sets, AI/ML can help predict whether the presence of these markers is associated with adverse cardiovascular outcomes. Coupled with improvements in the speed and accuracy of segmentation algorithms, AI can accelerate the clinical deployment of molecular imaging approaches for the timely detection of cardiovascular toxicity (184). CONCLUSIONS Imaging advances, particularly molecularly targeted imaging modalities, may help detect cardiotoxicities at the earliest stages with greater specificity and shed light on mechanisms as well as on the response to cardioprotective medications such as beta blockers and angiotensin converting enzyme inhibitors. Newer MRI metabolic evaluation techniques such as hyperpolarized MRI may allow a non-invasive approach to evaluating cardiac metabolism in real time. To complement imaging studies, the use of AI and big data on imaging parameters and forthcoming molecular imaging datasets, in addition to patient demographics, may help predict or detect cardiovascular toxicities at their earliest stages. Inclusion of diverse patient cohorts, as well as cardiovascular parameters/biomarkers and imaging, in cancer trials can enable AI/ML to improve both accurate categorization and prediction models in cardio-oncology patients. Additional research in these areas and advancing animal studies toward human studies may further help improve cardiovascular outcomes in cancer patients. AUTHOR CONTRIBUTIONS JK led the development of the manuscript, writing, and generation of figures. EO contributed to writing and generation of figures. MH contributed to writing and assisted with editing and organizing of the manuscript. AJS oversaw the writing, editing, and review of the manuscript. All authors contributed to the article and approved the submitted version.
9,608.2
2022-03-15T00:00:00.000
[ "Medicine", "Computer Science", "Engineering" ]
Development of Misorientation in FCC Single Crystals Under Compression at Different Scales This article presents an analysis of the reorientation of local areas in FCC single crystals under compression. The reorientation process was examined at different scales, from the scale of the sample to that of the dislocation substructure. It has been found that disorientation at the meso and macro levels is determined by the accumulation of misorientation at the level of the dislocation subsystem. Our research allows the magnitude of the accumulated misorientation to be quantified. The results of this study illustrate the interrelation of rotational and translational deformation modes both at the same scale level and across scale levels. Introduction During plastic deformation, two constantly interacting processes specify the translational and rotational components of plastic deformation. At early stages of the experimental investigation of plastic deformation, only one of these processes was observed and described, owing to the imperfections of research methods and facilities [e.g. 1,2]. X-ray analysis made it possible to monitor crystal reorientation, while the development of optical and electron microscopy made it possible to observe gliding dislocations. It has been established that both processes are closely related, but reorientation is most characteristic of large plastic deformations and higher dislocation densities [3][4][5]. At the onset of deformation, changes in a defect-free crystal occur through dislocation glide. However, in some cases the reorientation process can be initiated at an early stage of deformation. This phenomenon is known as the twinning mechanism, which operates when a particular crystallographic orientation, a low stacking fault energy and a suitable temperature are involved. All of the above applies to the entire range of crystalline materials, which are investigated not only by physicists and chemists, but also by geologists and other scientists and engineers. This approach is applicable to the study of deformation processes in single crystals with an FCC lattice. Despite their apparent simplicity (no grain boundaries), single crystals are rather complex objects because of their anisotropic properties and aspects of symmetry. Therefore, plastic deformation is traditionally considered at different scale levels, which makes it possible to explore particular aspects of the phenomenon. Experimental results and discussion Let us begin the examination of misorientation in single crystals at the macroscopic level. Crystallographic orientation changes are closely related to changes in sample shape during plastic deformation. The latter depend on the strain pattern and on the shape of the work-piece to which deformation is applied. In the laboratory experiments, the loading scheme (compression) and the sample form (tetragonal prism) were fixed. On the one hand, the effect of sample shape on compressive strength was identified and eliminated (the sample is stable at a height-to-width ratio of two). On the other hand, deformation with friction is complicated by the difference in stress state between the end faces (non-uniform hydrostatic compression) and the central part of the sample (uniaxial compression). Let us examine the results of experiments performed on tetragonal prism samples subjected to compression with mechanical friction. Glide crystallography was analyzed assuming slip on the {111}<110> systems.
Single crystals were oriented with the compression axis along the corners of a standard stereographic triangle, with a set of different side faces. Previously, the authors systematized the structural elements of the strain relief depending on the crystallographic orientation of the compression axis and the side faces. The proportions of the strain relief structural elements were determined, which clarified how the zones occupied by these elements, at the macro and meso levels, are involved in the plastic deformation of the investigated single crystals [6]. The magnitude of the plastic deformation heterogeneity of [111] nickel single crystals was determined experimentally for the examined crystallographic orientations of the side faces at various scales, considering strain relief formation and the compressive stress distribution [7]. It was found that localization zones do not occur within deformation domains. The inner structure develops in such a way as to reduce the strain heterogeneity and bring the average deformation value closer to that of the local deformation. The interfaces between neighboring deformation domains and the end-face areas are the zones of deformation localization. Different glide systems meet at the domain interfaces, while in the end-face areas deformation is increased because of material transfer under mechanical friction. The role of the stress distribution in plastic deformation inhomogeneity is shown in [6,7]. We consider single crystals with a height-to-width ratio of two. Samples with such an aspect ratio show the greatest stability under compression in the case of isotropic materials. In the case of single crystals, shear anisotropy has its own impact. As part of the problem statement, let us examine how the crystallographic orientation of the compression axis and of the lateral faces affects sample stability under compression. Different crystallographic orientations of the compression axis along the long sample axis give different resistances to deformation, depending on the orientation of the close-packed planes and shear directions in the volume [6]. It has been found that the greatest stability of the sample under compression is achieved when the symmetry of the sample about the compression axis coincides with the symmetry of the arrangement of the glide systems about this axis. This feature depends on the crystallographic orientation of the side faces. Besides, the orientation of the lateral faces affects the development of deformation in the single crystal volume and causes its discontinuity. The most detailed investigations of deformation heterogeneity were carried out on nickel [7]. It has been found that, at the macro level, deformation in single crystals with the compression axis [ 11 1 ] develops more inhomogeneously in tetragonal prism samples. In this case, the orientation of the deformation axis significantly reduces the possibility of symmetric shear relative to the side faces and of macro-stripes forming large deformation domains. Intense folding deformation occurs in neighboring domains. The development of compressive deformation contributes to a change in the orientation of [ 11 1 ] single crystals, which is expressed at the macro level in curving of the sample shape and crystallographic reorientation of its parts. Studies have shown that the sample body is divided into several reoriented fragments [8]. The lattice orientation changes least in the central fragment. In the adjacent fragments, reorientation is more pronounced. The crystal lattice orientation deviates strongly toward [ 10 1 ]. The lattice rotation occurs about the [110] axis.
In the end-face fragments, changes of the crystal lattice orientation are not observed. With a further increase of strain (to 22%), the macro-fragmentation into reorientation areas does not change (Fig. 1). The degree of reorientation of the fragments increases uniformly. The crystal lattice orientation remains practically unchanged in the end-face fragments. The lattice of the central fragment reorients toward [101] relative to the deformation axis X. It has been noted that the largest reorientations are observed relative to the compression axis X and are almost absent relative to the axis Z. This character of reorientation in the different areas of the single crystal has several causes. The original orientation in the end faces persists because, under the non-uniform compression scheme with friction, the development of deformation processes in these areas is hindered. Reorientation of the adjacent fragments is caused by the force-moment effect on the selected body, as the values of the stress tensor components vary between neighboring areas of this body. Reorientation lags in the central fragment. Reorientation of areas within single crystals of nickel, copper and aluminum occurs in a similar way. Trusov's investigations [9][10][11] show that a non-uniform stress field activates various sets of glide systems. In a local area, only one of the possible glide systems operates at a given time. Meanwhile, according to the Taylor criterion, deformation of a crystalline body can occur only under the action of five independent glide systems. In practice, the action of more than three or four glide systems in local material volumes is usually not observed. In general, this is insufficient for an arbitrary deformation. Therefore, this deficit must be compensated by the reorientation of crystal areas. Based on this approach, the authors of [12] proposed a flow model for crystalline solids in which the shortage of active glide systems is complemented by rotational modes. We observe the same phenomenon in the above experiments on copper and nickel. Compressive strain is not accommodated effectively by the six loaded shear systems in FCC single crystals with the compression axis oriented along [ 11 1 ] [7]. In such crystals, deformation has a number of distinctive features. First, deformation occurs by shear with the formation of systems of macroscopic deformation bands. Second, these systems of deformation bands are localized in certain areas of the crystal and do not provide its complete deformation. Third, the number and location of the loaded shear systems do not provide orientation stability during deformation. Because of this, the single crystal is divided into disoriented fragments. Reorientation occurs first in the central area of the single crystal. Reorientation in this region triggers the mechanism of rotational deformation and also involves additional shear systems that were not active before. This is an illustrative interrelationship of the translational and rotational modes of deformation at the same scale level. Experiments show that deformation domains should be classified as translational and rotational [13]. The boundaries between translational and rotational domains differ from the grain boundaries of a polycrystalline sample. The difference lies in the length and width of the boundaries, as well as in the continuous transition from one strain domain to another. In [14], the authors identify these boundaries as a separate element of structural deformation with their own geometric and structural parameters.
Depending on the orientation of a domain relative to the applied stress, the displacement vectors in adjacent domains have different or opposite directions. The prevailing shear orientation in the single crystal is always directed toward the free side faces. The areas of the different deformation domains are clearly observed in the picture of the displacement vector fields [14]. A comparison of the deformation domain areas with the spatial distribution of the strain tensor components (shear and rotational) indicates an increase of all tensor components at the boundaries of the deformation domains. This shows that an increase of the shear strain is accompanied by an increase of the rotational component. Thus, the resulting deformation is higher at the domain boundaries than within an individual domain. Data obtained with EBSD analysis indicate an accumulation of misorientation at the domain boundaries. Our studies have found that the misorientation increases on approaching the deformation domain boundary and increases significantly within the boundary itself. The accumulation of misorientation in a deformation domain depends on the deformation mechanism at the meso level. Previous studies [6] show that, depending on the orientation of the single crystal, domain deformation occurs as shear along parallel shear planes with the formation of slip bands in the form of meso- or macroscopic deformation bands. At moderate degrees of deformation, the macroscopic deformation bands are purely shear features and do not lead to reorientation of the meso-areas within the domain. With an increase of the strain degree, an excess density of like-sign dislocations accumulates in local areas of the crystal, which contributes to the development of misorientation. In addition, in the near-surface region, the misorientation effect is manifested as deformation folds. Fig. 2 a-b shows an area of fold formation. The EBSD analysis was carried out on a cross-section, which allows the disorientation to be examined as a function of distance from the surface. In this case, we observe reorientation of local areas of the crystal at the places where folds form. The geometric image of the boundaries (Fig. 2a) illustrates the distribution and magnitude of the disorientation boundaries in the areas occupied by the various types of structural relief elements. Color corresponds to the value of the misorientation angles (Fig. 2b). Here a brief methodological remark is necessary. The boundaries are drawn by the software, which implies a certain interpolation of the disorientation over a segment perpendicular to the boundary line. Thus, the boundaries, as interpreted by the software, are diffuse within a certain region determined by the interpolation area. We can see that, for the given degree of deformation, the disorientation angles do not exceed 5°-10°. Disorientations of up to 5° occur within the reoriented stripes, with larger angles along their boundaries (Fig. 2d). Consequently, folding contributes to the formation of new boundaries within the single crystal. The disorientations observed with EBSD analysis were then compared with the disorientations at the level of the dislocation subsystem. The EBSD map was recorded with a minimum step size of 1 pixel = 5.5 microns. Thus, if, after processing, a disorientation boundary runs along the boundary between adjacent pixels, the disorientation is accumulated over a distance of 5.5 microns. The dislocation structure of nickel has the form of dislocation cells (Fig. 3a) at the initial degree of deformation. With increasing strain, a dislocation over-density accumulates in the boundaries of the dislocation cells, as evidenced by the change in contrast between adjacent cells or groups of cells (Fig. 3).
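The relationship between the per-pixel misorientation measured by EBSD and the per-cell-wall misorientation of the dislocation substructure, quantified in the next paragraph, can be checked with a simple additive estimate; the sketch below uses the 5.5 μm step size quoted above together with the 0.5 μm cell size and roughly 0.5° per cell wall quoted below, and assumes that like-sign wall misorientations simply add up across a pixel.

```python
# Back-of-the-envelope check: accumulated misorientation across one EBSD pixel,
# assuming like-sign dislocation-cell-wall misorientations add up linearly.
pixel_size_um = 5.5        # EBSD step size (1 pixel)
cell_size_um = 0.5         # dislocation cell size at this strain
theta_per_wall_deg = 0.5   # typical misorientation per cell wall

cells_per_pixel = pixel_size_um / cell_size_um            # 11 cells
accumulated_deg = cells_per_pixel * theta_per_wall_deg    # 5.5 degrees
print(f"{cells_per_pixel:.0f} cells per pixel -> ~{accumulated_deg:.1f} deg per pixel")
```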
Contrast changes usually become apparent when the disorientation magnitude reaches about 0.5°. This can be verified either by electron microscopy analysis or by direct measurement with the goniometer, locating the tilt angle at a certain orientation of the tilt axis in the foil plane. The size of the dislocation cells is 0.5 microns at the degree of deformation examined [15]. That is, 11 dislocation cells fit within one pixel, which gives the accumulated disorientation of 5.5° observed in the experiment. Thus, the misorientation magnitude determined by EBSD agrees with the disorientation accumulated during deformation in the cellular dislocation substructure. The accumulation of over-density varies among cell walls of finite thickness. Depending on the distribution of like-sign dislocations at the boundary, the disorientation across the boundary has various magnitudes, and the structure transforms sequentially into a blocked, then a fragmented and then a subgrain structure. The volumes bounded by disorientations constitute structural elements which can undergo reorientation under certain conditions. This indeed occurs, in particular, under conditions of superplastic flow, when grain boundary gliding or microporosity formation facilitates the rotation of grains at the boundary. At the meso level, individual rotational defects of very significant magnitude can be observed, mainly of the disclination type. Many studies interpret phenomena in the dislocation structure as manifestations of disclination formations. To date, the findings obtained in experiments on metallic materials are incomplete. In our view, a disclination loop obtained experimentally by the authors is particularly valuable; the loop was formed in an ordered iron-nickel alloy with the compression axis oriented along [001] [16]. The magnitude of the azimuthal disorientation, determined from the splitting of reflections on an electron diffraction pattern, is 10°. In this case, the formation of the disclination loop facilitated the formation of a "knife-edge" shear boundary in the rest of the crystal volume, which resulted in a shift of the dislocation cell wall by 0.2-0.4 microns. The authors earlier classified this phenomenon as a mechanism for the destruction of the stable cellular structure of the ordered alloy. The experimental results also illustrate the interrelation of translational and rotational deformation modes at the level of mesodefects of the dislocation-disclination subsystem. The deformation mechanism is the parallel action of two "knife-like" edges, which shear the cell boundaries in opposite directions. After the strong shear ceases, a large-scale disclination loop is formed within the crystal, with a large misorientation angle between the loop interior and the surrounding matrix. Conclusions Thus, analysis of the literature and the authors' own results shows the interrelation of disorientation processes at different scale levels. Accumulation of misorientation at the level of the dislocation subsystem (the lower scale level) leads to the accumulation of misorientation at the meso and macro levels. Our research allows the magnitude of the accumulated misorientation to be quantified and the crystallographic orientation changes of the deformation elements to be monitored at all scales. Furthermore, the results of this study illustrate the interrelation of rotational and translational deformation modes both at the same scale level and across scale levels.
3,427.2
2016-08-01T00:00:00.000
[ "Materials Science" ]
Transcriptomic analysis of genetically defined autism candidate genes reveals common mechanisms of action Background Autism spectrum disorder (ASD) is a heterogeneous behavioral disorder or condition characterized by severe impairment of social engagement and the presence of repetitive activities. The molecular etiology of ASD is still largely unknown despite a strong genetic component. Part of the difficulty in turning genetics into disease mechanisms and potentially new therapeutics is the sheer number and diversity of the genes that have been associated with ASD and ASD symptoms. The goal of this work is to use shRNA-generated models of genetic defects proposed as causative for ASD to identify the common pathways that might explain how they produce a core clinical disability. Methods Transcript levels of Mecp2, Mef2a, Mef2d, Fmr1, Nlgn1, Nlgn3, Pten, and Shank3 were knocked down in mouse primary neuron cultures using shRNA constructs. Whole genome expression analysis was conducted for each of the knockdown cultures as well as a mock-transduced culture and a culture exposed to a lentivirus expressing an anti-luciferase shRNA. Gene set enrichment and a causal reasoning engine were employed to identify pathway-level perturbations generated by the transcript knockdown. Results Quantification of the shRNA targets confirmed successful knockdown of at least 75% at the transcript and protein levels for each of the genes. After subtracting out potential artifacts caused by viral infection, gene set enrichment and causal reasoning engine analysis showed that a significant number of gene expression changes mapped to pathways associated with neurogenesis, long-term potentiation, and synaptic activity. Conclusions This work demonstrates that despite the complex genetic nature of ASD, there are common molecular mechanisms that connect many of the best established autism candidate genes. By identifying the key regulatory checkpoints in the interlinking transcriptional networks underlying autism, we are better able to discover the ideal points of intervention that provide the broadest efficacy across the diverse population of autism patients.
Background Autism spectrum disorder (ASD) is a heterogeneous developmental disease that is primarily characterized by behavioral and social impairments such as the presence of repetitive or ritualistic activities, social withdrawal, and difficulty with proper communication. ASD is more commonly diagnosed in male individuals at a 4:1 ratio and its incidence has notably risen over time. It is currently estimated that ASD afflicts up to one out of every eighty-eight individuals and is now counted as the second most common developmental disability after intellectual disability [1][2][3]. Current treatment options for autism are limited, focusing primarily on behavioral therapies and repurposed drugs whose primary indication is not autism. It has long been appreciated that ASD has a strong genetic component underlying its etiology. Early twin studies, examining the co-inheritance of ASD among monozygotic twins, reported a heritability rate for ASD between 60% and 90% [4]. The role of genetics in ASD has been further elucidated and refined at the single gene level as tools such as genome-wide association studies (GWAS), copy number variant (CNV) mapping, and whole exome/genome sequencing have been applied to the disease [5][6][7][8][9][10]. A clear association has been demonstrated between genetic variants in genes, such as Contactin-associated protein-like 2 (Cntnap2) and Semaphorin-5A (Sema5A), and ASD, and the localization of rare deletions and duplications has not only led to the identification of new autism candidate genes, such as SH3 and multiple ankyrin repeat domains 3 (Shank3), but also to the creation of new mouse models that parallel ASD at both the genetic and behavioral level [11][12][13][14]. Our understanding of the genetics and molecular mechanisms of ASD has also been greatly enriched by the study of rare diseases caused by mutations in a well-defined single gene with symptomatic overlap with ASD. Two of the best known examples of this are Fragile X and Rett syndromes. Fragile X is caused by an expansion of a CGG repeat in the Fragile X mental retardation-1 (Fmr1) gene and results in mental retardation. Fragile X, because it is X-linked, is preferentially found in male individuals, and 25% to 33% of Fragile X patients also meet the criteria for ASD, making it one of the most common genetic causes of autism [15]. Rett syndrome is also X-linked but, unlike Fragile X and ASD, it is predominantly diagnosed in female individuals, because the hemizygous state is often lethal. Rett syndrome too is marked by mental retardation and frequent comorbidity with autism. In addition to being directly tied to ASD through Rett, Methyl-CpG binding protein 2 (Mecp2), a transcription factor mutated in Rett, regulates the expression of other genes that have been tied to ASD, including Brain-derived neurotrophic factor (Bdnf) [16,17]. Through the use of modern genetic methods and the study of syndromic forms of autism, over 200 genes have been associated with ASD [18].
In an attempt to gain a better understanding of the molecular pathophysiology of the disease, tools such as pathway analysis [19] and protein-protein interaction networks [20][21][22] have been deployed to identify common mechanisms among these autism-risk genes, and one of the dominant themes that has emerged is a convergence on synapse integrity and dendritic spine formation [23][24][25]. Phosphatase and tensin homolog (Pten), the causative gene for Cowden syndrome (another syndromic form of autism), has been shown to cause increased neuronal spine density, dysfunction in excitatory and inhibitory synaptic activity and decreased synaptic plasticity when deleted [26][27][28]. Shank3 encodes a synaptic scaffolding protein, while Neuroligin 1 and 3 (Nlgn1, Nlgn3) produce synaptic cellular adhesion molecules. All three genes have been shown to be altered in ASD patients [29][30][31]. Finally, Myocyte enhancer factor 2A and 2D (Mef2A, Mef2D) are activity-dependent genes that encode transcription factors regulating multiple additional genes implicated in ASD (Ube3A, Slc9A6, Pcdh10, and C3orf58) [32], and knockdown of these genes in primary neurons has been shown to regulate synapse density [33]. Despite the clues that have been provided by these genetic links, a true understanding of how those genetic defects translate into altered biology has continued to be elusive and has therefore made the development of new therapies for ASD difficult. The current gross appreciation of impacted dendritic spines and synaptic health falls short of the detailed visualization of the molecular mechanisms of ASD necessary to advance the field. Therefore, in this study, we sought to determine the molecular consequences of the loss of function of these diverse genes that have been genetically implicated in autism by use of an in vitro model system. Primary neuronal cultures are a well-established model for studying fundamental synaptic biology with a well-characterized trajectory of synaptic differentiation and function [5,34]. These cultures have proven to be a robust system for characterizing the transcriptional consequences of synaptic modulation under a number of settings [32,35,36]. We have focused on cortex as a tissue of origin based on observation of pathologic changes in post-mortem ASD cortex [8] and prior work studying ASD-relevant gene function in cortical neurons [10]. By knocking down Mecp2, Mef2a, Mef2d, Fmr1, Nlgn1, Nlgn3, Pten, and Shank3 (Table 1) in murine primary cortical neurons, we were able to compare and contrast the transcriptional profiles of each knockdown to arrive at core signaling pathways that unite this otherwise disparate group. Pathways that are in common between the various candidate genes would provide one potential explanation of how a mutation in each of them might produce the same clinical outcome, ASD. As all of these genes play a role relevant to synaptic structure or function, the hypothesis was that common downstream genes and pathways might be perturbed. For a disorder with heterogeneous genetic backgrounds that produce common behavioral phenotypes, a common molecular pathway could provide a new avenue for therapeutic intervention. Lentiviral shRNA construct generation and production Lentiviral constructs were generated by cloning annealed and kinased complementary oligonucleotides into the lentiviral vector pLL3.7_H1 (a version of pLL3.7, Patrick Stern-MIT, modified to encode the human H1 promoter to drive short-hairpin (sh)RNA expression).
For each gene, the target sense sequence (Table 2), followed by the loop sequence TTCAAGAGA, the corresponding antisense sequence and a TTTTTT terminator, was ligated (New England Biolabs, Ipswich, MA, USA) into the BamHI (5′) and XhoI (3′) cloning sites downstream of the human H1 promoter in pLL3.7_H1. Lentivirus was produced per the manufacturer's instructions via quadruple co-transfection of the shRNA-containing pLL3.7_H1 plasmid along with the 3-plasmid ViraPower (Life Technologies, Grand Island, NY, USA) system into HEK293T cells. Then, 24 hours post transfection, the media were changed to complete neurobasal media (Life Technologies) and lentivirus-conditioned media were harvested 48 hours later. Functional titer was determined based on green fluorescent protein (GFP) co-expression in HEK293T cells using flow cytometry (FACSCalibur, Becton Dickinson, Franklin Lakes, NJ, USA). Optimal lentiviral transduction of primary cultured cortical neurons was determined to be a multiplicity of infection (MOI) of 3.0, based on fluorescence. Primary neuronal cultures and transductions Mouse primary neuronal cultures were prepared from day-16 C57BL6/J embryos. All procedures related to animal care and treatment were conducted under a protocol approved by the Pfizer Institutional Animal Care and Use Committee, according to the guidelines of the National Research Council Institute for Laboratory Animal Research Guide for the Care and Use of Laboratory Animals and the US Department of Agriculture Animal Welfare Act and Animal Welfare Regulations. Briefly, timed pregnant dams were received from Jackson Laboratories and whole brains were removed and placed into Hank's (Life Technologies) solution for dissection (10 uM MgCl2, 7 uM HEPES, 2 mM glutamine, 100 ug/mL penicillin and 100 U/mL streptomycin were also added). Cortex was then cut and dissociated by a 10-minute trypsin treatment. Then, 500,000 cortical cells were placed on 6-well poly-D-lysine-coated tissue culture plates and maintained in serum-free medium (neurobasal medium (Life Technologies) containing 1X B27 supplement (Life Technologies), 2 mM glutamine, 100 ug/mL penicillin and 100 U/mL streptomycin).

Table 1. Autism candidate genes targeted in this study (gene; protein function; localization; association with autism):
Myocyte-specific enhancer factor 2a (Mef2a); transcription factor; nuclear; rare single gene mutations in downstream targets associated with autism symptom domains [32]
Myocyte-specific enhancer factor 2d (Mef2d); transcription factor; nuclear; rare single gene mutations in downstream targets associated with autism symptom domains [32]
Fragile X mental retardation 1 (Fmr1); RNA binding protein; nuclear; causes Fragile X, which shares some symptom domains with autism [38]
Neuroligin-1 (Nlgn1); synaptic remodeling; synaptic; rare single gene mutations associated with autism symptom domains [30]
Neuroligin-3 (Nlgn3); synaptic remodeling; synaptic; rare single gene mutations associated with autism symptom domains [31]
Phosphatase and tensin homolog (Pten); regulator of the cell cycle; nuclear and synaptic; causes Cowden syndrome, which shares some symptom domains with autism [26,39]
SH3 and multiple ankyrin repeat domains 3 (Shank3); scaffold protein; synaptic; rare single gene mutations associated with autism symptom domains [12,29]

Plate-randomized, quadruplicate cortical cultures were transduced at 2 days in vitro (DIV2) at the optimized MOI of 3.0. Lentiviral particles remained on the cultures for 6 hours, after which they were removed and replaced with conditioned complete neurobasal medium.
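To illustrate the hairpin layout described above (sense target, TTCAAGAGA loop, antisense and TTTTTT terminator), here is a minimal sketch that assembles the sense-strand insert for an arbitrary 19-nt target; the example target sequence is made up (not one of the study's Table 2 sequences), and the restriction-site overhangs needed for ligation into the BamHI/XhoI sites are omitted since they depend on the exact cloning strategy.

```python
def reverse_complement(seq: str) -> str:
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[base] for base in reversed(seq.upper()))

def shrna_sense_insert(target_sense: str, loop: str = "TTCAAGAGA") -> str:
    """Sense strand of an shRNA insert: target sense + loop + target antisense + terminator."""
    return target_sense.upper() + loop + reverse_complement(target_sense) + "TTTTTT"

# Hypothetical 19-nt target sense sequence for illustration only
example_target = "GCTACGATCGTTAGCCTAA"
print(shrna_sense_insert(example_target))
```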
Cultures were allowed to mature for an additional 14 days post transduction (DIV16), at which time total RNA was isolated (described below). Hairpin validation For each gene target, five unique shRNA-targeting lentiviral constructs were generated as described above, along with an shRNA control (designed against luciferase), and used to produce small-scale lentiviral stocks. Viral stocks were used to transduce primary cortical neuronal cultures (see primary neuronal cultures and transduction methods) on DIV2 and cells were grown in culture for an additional 7 to 10 days. Total RNA and protein were isolated from replicate cultures. Quantitative PCR (qPCR) (Figure 1) and western blotting (Additional file 1: Figure S1) were performed to validate a minimum knockdown level of 75% at the mRNA (described below) and protein levels for all hairpin constructs used in the study. Glyceraldehyde 3-phosphate dehydrogenase (Gapdh) levels were monitored at both the RNA and protein levels as a control. The best-performing hairpin for each gene was carried forward for genome-wide expression analysis. RNA isolation, cDNA synthesis and qPCR Total RNA was isolated using the Qiagen (Germantown, MD, USA) RNeasy mini total RNA isolation kit according to the manufacturer's instructions. RNA quality was validated using a NanoDrop spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA), ensuring that a 260/280 ratio of at least 2.0 was obtained. cDNA was generated from 1 ug total RNA using the Life Technologies High Capacity RNA-to-cDNA kit (number 4390716) according to the manufacturer's instructions. Prior to the Affymetrix (Santa Clara, CA, USA) GeneChip analysis, qPCR for the target gene was performed on 15 ng RNA-equivalent cDNA from quadruplicate replicates to ensure knockdown. Only samples showing acceptable knockdown (>75% knockdown by mRNA) were submitted for gene chip analysis. Microarray hybridization and quality control analysis Total RNA was hybridized to Affymetrix Mouse 430_2 microarrays at Gene Logic (Gaithersburg, MD, USA). RNA degradation plots were analyzed for quality control purposes. Four samples did not pass quality control (QC) and were omitted from further analysis (one each from the Mef2d, Nlgn1, Shank3, and non-transduced groups). The raw data files were then normalized using robust multi-array averaging (RMA) [40]. Hierarchical clustering by positive correlation (Ward linkage) was performed in Genedata Expressionist. Prior to statistical analysis, probe sets with _x designations were excluded for potential lack of specificity. Additional probe sets were excluded if absolute expression was <50 for all samples (expression was considered absent), resulting in 24,343 probe sets for statistical analysis. Gene expression for all sample types was analyzed on the log2 scale. Linear models were used to calculate P-values between the groups of interest. The linear model t-statistics were regularized using the moderated-t approach of Smyth [41]. Adjustment of P-values was performed according to Benjamini and Hochberg [42] to control for multiplicity of testing.

Figure 1. shRNA efficiently knocks down RNA levels of target genes. The bars representing the luciferase-targeting control short-hairpin (sh)RNA-treated neuronal samples are the average values across all target genes. There was no significant difference between the untreated cells and luciferase shRNA-treated cells for any of the targeted genes.
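A minimal sketch of the significance filtering described here (Benjamini-Hochberg adjustment followed by the FDR and fold-change cutoffs applied below); the p-values and log2 fold changes are made-up stand-ins for the per-probe-set statistics, and statsmodels' multipletests is used in place of the original analysis software.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical per-probe-set statistics for one knockdown vs. the luciferase control
p_values = np.array([1e-6, 0.003, 0.02, 0.04, 0.20, 0.60])
log2_fold_change = np.array([2.1, -1.3, 0.9, -0.2, 1.8, 0.1])

# Benjamini-Hochberg adjustment controls the false discovery rate across all tests
reject, fdr_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

# Keep probe sets with FDR-corrected P <= 0.05 and at least a 1.5-fold change (|log2 FC| >= log2(1.5))
selected = (fdr_adjusted <= 0.05) & (np.abs(log2_fold_change) >= np.log2(1.5))
print("probe sets passing both filters:", np.where(selected)[0])
```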
Each set of microarrays from an shRNA treatment group was compared to the set of microarrays from the luciferase shRNA control group. Probe sets with a false discovery rate (FDR)-corrected P-value ≤0.05 and a ≥1.5-fold change were identified for each treatment group for pathway analysis, as the historical RT-PCR confirmation rate of microarray data fitting these criteria is approximately 70% (Additional file 2: Table S1). Overlap with a recently published autism gene interactome [20] was assessed for all treatment groups. All primary microarray data from this experiment are available in the Gene Expression Omnibus [GEO:GSE47150]. Bioinformatics analysis of gene expression data Analyses of gene lists from the shRNA experiments were performed using either Nextbio™ software (Santa Clara, CA, USA, www.nextbio.com), the Gene Sensor Suite (GSS), or the causal reasoning engine (CRE). The NextBio software uses a modified form of the gene set enrichment algorithm to identify important pathways and other ontologies [43]. All analyses done with NextBio used the default parameters. NextBio pathway analysis utilized the pathways compiled by the Broad Institute's gene set enrichment analysis (GSEA) application as part of their molecular signatures database, MSigDB [44]. Related tissues were identified from NextBio's transcriptional profiles for over 6,000 publicly available studies. The GSS application identifies significantly enriched pathways using Fisher's exact test, corrected for multiple testing using the Q-value [45,46]. GSS pathways were generated from Ingenuity pathways from October 2010 (Ingenuity® Systems (Redwood City, CA, USA), www.ingenuity.com). The CRE algorithm uses multiple statistical parameters to assess the similarity of measured gene expression changes to known upstream effectors and their downstream responses [47,48]. Simply stated, the CRE can be thought of as an enhanced type of gene set enrichment analysis (GSEA). Causal statements were curated from the biomedical literature (Ingenuity and Selventa knowledge bases) in the form "X (increases or decreases) Y", where X and Y are measured biological quantities. These quantities can be of multiple types, including protein modifications, mRNA levels, biological processes and/or chemical compound treatments. The combined knowledge base is then interrogated with the microarray transcriptomic data to infer upstream events (called hypotheses). The CRE algorithm generates statistical stringency by employing two primary methods. First, the enrichment of all possible transcripts for the hypothesis is measured, a method shared with GSS and GSEA. Second, the correctness of the hypothesis is calculated, simply as the number of transcripts changing in the predicted direction minus the number changing in the opposite direction. There are two advantages to these methods in the CRE. The first advantage is that it is a specific molecular interaction underlying the hypothesis that is being evaluated. Second, the directionality of the interaction within the hypothesis is retained through the correctness parameter. P-values were generated and cutoffs were applied using the following filters: correctness P-value <0.05, enrichment P-value <0.05, minimum number of correctly explained gene expression changes ≥3, percent correctly explained gene expression changes ≥60%, ranking score <100.
The hypotheses were deciphered and visualized using the Causal Reasoning Browser, a Java-based plugin for the open source biomolecular interaction viewer Cytoscape (www.cytoscape.org) [49]. BDNF quantitation Neurons were treated in 24-well plates with blank media or shRNA against luciferase, Fmr1, or Mecp2 in randomized wells across two plates (n = 12 per condition). For protein analysis, neurons were lysed in 20 mM TrisHCl (pH 7), 137 mM NaCl, 1% NP40, 10% glycerol, 1 mM PMSF, 10 μg/mL aprotinin, 1 μg/mL leupeptin, and 0.5 mM sodium orthovanadate. Lysates were centrifuged at 14,000 × g for 30 minutes at 4°C. Supernatants were stored at −80°C until assay. BDNF levels were measured using a modified version of the Promega (Madison, WI, USA) BDNF Emax® Immunoassay system (G7611). Half-volume 96-well ELISA plates (Costar®; Corning, Lowell, MA, USA) were coated with 50 μl anti-BDNF mAb at 1:1000 dilution in 0.025 M sodium bicarbonate and 0.025 M sodium carbonate, sealed, and stored at 4°C overnight. Plates were washed four times with PBS containing 0.05% Tween20, then blocked for 2 hours at room temperature with 130 μl/well Promega blocking buffer (G3311). Samples and standards were prepared in blocking buffer (1:4 dilution), then loaded onto the plates (50 μl) following a wash step. Plates were sealed and stored at 4°C. On the third day plates were washed and incubated with 50 μl/well anti-human BDNF pAb at 1:500 dilution in blocking buffer for 2 hours at room temperature. Plates were washed again and incubated with 50 μl anti-IgY horseradish peroxidase conjugate at 1:200 dilution in blocking buffer for 1 hour at room temperature. Following a final wash, 50 μl TMB solution was added to each well. The reaction was stopped with 1 N HCl after 10 minutes, and 450 nm optical densities were read on a Spectramax plate reader (Molecular Devices, Sunnyvale, CA, USA). Samples were interpolated off of a standard curve fit by a fourth order polynomial equation. Interpolated BDNF levels were normalized to total protein (DC Protein Assay Kit II, Bio-Rad, Hercules, CA, USA). GraphPad Prism 5.0 was used to perform the Kruskal-Wallis test followed by the Dunn test for multiple comparisons, to determine statistically significant changes (P-value <0.05). Confirmation of knockdown Prior to transcriptomic analysis, individual RNA samples were confirmed for relative knockdown by quantitative RT-PCR. Average knockdown of replicate samples (Figure 1) for the candidate genes were as follows: (89%), Pten (95%) and Shank3 (90%). Knockdown was normalized to a single untransduced cortical neuronal sample (detected message levels were relative to Gapdh). All individual samples showed at least a 75% knockdown of target gene expression providing high confidence that the pathways under investigation were being significantly perturbed. Additional experiments (Additional file 1: Figure S1) indicated that protein levels for all gene products were decreased in conjunction with lentiviral-mediated RNA knockdown in the primary neurons. Evaluation of differentially expressed genes Expression values for each of the shRNA-targeted genes as determined by the Affymetrix GeneChips correlated well with values determined by RT-PCR (Figure 1 -Affymetrix data from shRNA-treated samples were compared to the untransfected control for purposes of the figure). Hierarchical clustering of normalized data revealed tight correlation among biological replicates, with the exception of Mef2a, in which one sample was separated from the rest (Figure 2). 
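For readers who want a computational analogue of the ELISA readout and group comparison described above, the sketch below fits a fourth-order polynomial standard curve, interpolates unknowns and applies the Kruskal-Wallis test. The concentrations, optical densities and protein values are illustrative only, and the Dunn post hoc comparison is left to a dedicated package.

```python
# Minimal sketch with made-up standards and sample readings; not the study's data.
import numpy as np
from scipy import stats

std_conc = np.array([0.0, 7.8, 15.6, 31.25, 62.5, 125.0, 250.0, 500.0])  # pg/mL standards
std_od = np.array([0.05, 0.09, 0.14, 0.24, 0.42, 0.75, 1.30, 2.10])      # OD450 readings

# Fourth-order polynomial mapping OD450 to concentration, used to interpolate unknowns
curve = np.poly1d(np.polyfit(std_od, std_conc, deg=4))

sample_od = np.array([0.31, 0.28, 0.55, 0.52, 0.20, 0.22])
total_protein = np.array([1.1, 1.0, 1.2, 1.1, 0.9, 1.0])                 # mg, assumed
bdnf = curve(sample_od) / total_protein                                  # normalized BDNF

luciferase, fmr1, mecp2 = bdnf[:2], bdnf[2:4], bdnf[4:]
h_stat, p_value = stats.kruskal(luciferase, fmr1, mecp2)
print(f"Kruskal-Wallis H = {h_stat:.2f}, P = {p_value:.3f}")
```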
Pten was the most distinct treatment group, lying in its own branch of the tree. The next most isolated treatment group was with Mecp2 knockdown (Additional file 3: Table S2). These treatments produced the most numerous changes in gene expression amongst all the hairpins. The total number of probe sets identified as significantly different from the luciferase control (>1.5 fold at P <0.05) in each condition were as follows: Fmr1 (2,395), Mef2d (2,736), Mef2a (1,059), Mecp2 (3,967), Nlgn1 (1,230), Nlgn3 (2,224), Pten (3,653), and Shank3 (1,445). Comparison of the luciferase shRNA versus the untransduced control revealed the smallest number of significant changes -997. As an early determination of the relevance of cell-culture knockdown to the known molecular biology of ASD, the current datasets were evaluated for enrichment in an ASD gene interactome established by Sakai et al. [20]. Although the luciferase versus blank condition was not significantly enriched for genes in this interactome, Fmr1, Mecp2, Mef2a, Mef2d, Nlgn1, Pten, and Shank3 shRNA transcriptomes all showed significant overlap ( Table 3). The most frequently identified ASD interactome gene was CAMK2A, which was upregulated by Fmr1 shRNA, but downregulated by all of the other ASD gene shRNA targets. NextBio detection of related transcriptional profiles The Nextbio database allows for comparison of transcriptional profiles between datasets and transcriptional profiles for over 6,000 publically available studies. The most highly correlated datasets for any of the ASD gene shRNA profiles were other ASD shRNA profiles from this experiment. As a control, the Mef2a and Mef2d profiles were compared against a published study in which the same hippocampal neurons were transduced with both Mef2a and Mef2d [32]. The published study showed significant positive correlation with the present Mef2a and Mef2d datasets, with 107 genes in common with Mef2a profile and 283 genes in common with the Mef2d. Similarly, a comparison with microarray analysis of cortex from Mecp2 knockout mice showed significant overlap with this Mecp2 shRNA transcriptional profile (445 genes in common, P = 1 × 10 -8 ) [6]. The most highly correlated publically available transcriptional profiles for the remaining ASD-related genes came from comparisons of mouse brains at various postnatal ages to embryonic day 14.5 [50] or up to birth [51]. A time course of primary mouse hippocampal neurons in vitro [52] was also correlated with all shRNA treatments (data not shown). All of these developmental datasets showed significant inverse correlation with all shRNA treatments, including luciferase. Given the number of activity-dependent genes affected by shRNA treatment, the correlations with developmental datasets suggest that lentiviral gene delivery may have nonspecifically altered the development of the mouse neurons. Therefore, care was taken to subtract any changes observed in the lentiviral-treated cells from the other datasets for all analyses in an attempt to minimize the impact of this potential artifact. Pathway analysis NextBio analysis of the MSigDB pathways yielded a large number of canonical pathways significantly enriched in one or more treatment group. Three of these pathways were significantly enriched in the luciferase versus blank comparison but not in any other dataset. In addition, the luciferase versus blank dataset yielded a number of pathways in common with other shRNA datasets. 
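One way to make the interactome-overlap comparison above concrete is a hypergeometric enrichment test, sketched below with placeholder counts; the interactome size and overlap are assumptions, not values reported in the study.

```python
# Minimal sketch: is the overlap between a DEG list and the ASD interactome larger
# than expected by chance? Counts are illustrative placeholders.
from scipy.stats import hypergeom

total_genes = 24343    # probe sets analysed (background)
interactome = 500      # background genes in the ASD interactome (assumed)
degs = 2395            # DEGs for one condition, e.g. Fmr1
overlap = 80           # DEGs that are also interactome members (assumed)

# Probability of observing an overlap at least this large under random sampling
p_enrich = hypergeom.sf(overlap - 1, total_genes, interactome, degs)
print(f"hypergeometric enrichment P = {p_enrich:.3g}")
```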
These pathways were considered nonspecific and excluded from further analysis. After excluding these pathways, 256 canonical pathways were significantly enriched in one or more treatment groups. Many of these pathways were affected by more than one condition; 26 pathways were significantly enriched by 5 or more ASD shRNA datasets, and the top 15 most conserved are shown in Table 4. The most frequently enriched pathway was the Neurotrophic tyrosine kinase receptor type 1 (TrkA) receptor pathway, in which all datasets had a significant number of downregulated genes except Pten, which had a significant number of upregulated genes. Other pathways affected by multiple ASD gene shRNA targets included signaling pathways related to additional genes implicated in ASD, such as Neuregulin, Mammalian Target Of Rapamycin (mTOR), and Reelin. Table 4 legend: ASD gene count, number of conditions in which the pathway was significantly affected (that is, the number of different shRNAs that altered that particular pathway); Up, numbers of member genes altered in each pathway at least 1.5-fold broken out by positive fold-changes; Down, numbers of member genes altered in each pathway at least 1.5-fold broken out by negative fold-changes; ASD, autism spectrum disorder; NFKB, Nuclear factor KB; TrkA, Neurotrophic tyrosine kinase receptor type 1; PGC1A, Peroxisome proliferator-activated receptor gamma coactivator 1-A; ERBB3, Receptor tyrosine-protein kinase erbB-3. GenSensor analysis of the Ingenuity pathways resulted in 114 canonical pathways that were significantly overrepresented in one or more of the ASD shRNA treatment groups. The top fifteen most conserved pathways are shown in Table 5 (see Additional file 4: Table S3 for the complete list). Among the pathways found only in the ASD shRNA-treated samples, there are a number of pathways related to neuronal signaling, in particular to cyclic AMP signaling. Thirteen of these pathways were significantly enriched in the luciferase versus blank comparison (highlighted in yellow in the table). These pathways may indicate general effects of the shRNA delivery system on the neuronal cell culture. However, because GSS does not consider the magnitude of gene expression change, these results do not preclude real treatment-related effects on these pathways above the background levels induced by the lentivirus. Only two pathways were found to be enriched solely in the luciferase versus blank comparison. Table 5 legend: Numbers are odds ranking of the pathways in each shRNA experiment. All pathways were sorted by number of shRNA experiments in which they appear (high to low) followed by average ranking in experiments (low to high). Italicized pathways were also enriched in the luciferase control experiment. GNRH, gonadotrophin-releasing hormone; THOP, Thimet oligopeptidase 1; NFAT, Nuclear factor of activated T-cells. The set of CRE hypotheses shared between the highest number of experiments is shown in Figure 3 (see Additional file 5: Table S4 for the complete list). There was no hypothesis that was seen in all eight experimental conditions and not in the blank. Looking at the experiments broadly, two groups emerge based on the conservation. Mecp2, Mef2d, Nlgn3, Mef2a, Nlgn1 and Shank3 share similar hypotheses and were more dynamic, generating 3 to 5 times more hypotheses. In contrast, the same hypotheses are not seen being implicated for the Fmr1 and Pten experiments, with the latter experiment appearing quite different from the rest.
The predicted hypotheses are overwhelmingly downregulated (86%; 230 of 268), with the respective contributions being 87% (Mecp2), 89% (Mef2d), 84% (Nlgn3), 86% (Mef2a), 87% (Mef2a), 76% (Nlgn1), and 79% (Fmr1); Pten again is the exception, with only 58%. Figure 4 is a composite of the most conserved hypotheses generated by CRE for the seven concordant treatment groups. Recurring hypotheses are highlighted with circles. The central hubs of the network are cyclic AMP and the extracellular signal-regulated kinase (ERK)1/2 family, which are directly connected to seven and eight primary hypotheses, respectively. Confirmation of BDNF protein response Given that multiple hypotheses predicted from multiple target-knockdown datasets converge on Bdnf, regulation of Bdnf could play a central role in ASD pathobiology. In order to confirm that these predicted changes in BDNF were accurate and that the transcriptional changes measured translated to the protein level, the two shRNA conditions in which Bdnf mRNA was most robustly altered (Fmr1 and Mecp2) were evaluated for impacts on BDNF protein. Neurons were treated in the same manner as for the microarray study, and lysates were harvested and analyzed by ELISA for BDNF. As predicted by mRNA levels, the luciferase shRNA construct alone significantly lowered BDNF (P-value <0.05) (Figure 5). Mecp2 shRNA further reduced BDNF levels, while Fmr1 shRNA significantly increased BDNF levels relative to luciferase shRNA (P-value <0.05). Figure 3. Diverse set of ASD-associated genes produce similar pathway-level perturbations when knocked down. A total of 269 hypotheses were observed in at least three of the experiments and not in the blank. Only the hypotheses that were observed in at least 6 of the treatment conditions are included in the figure (red squares indicate that the hypothesis was identified for that experimental condition). Additional file 5: Table S4 contains a full list of the 269 hypotheses. The notation (+) indicates the hypothesis is predicted to be upregulated, and (−) indicates it is predicted to be downregulated. Names highlighted in orange are part of the molecular interaction network (Figure 4). Discussion ASD is a neurological disorder with a strong genetic component that has been linked to a number of gene defects. These genes have a broad range of activities, ranging from membrane receptors and scaffold proteins to metabolic regulators and transcription factors [25,35]. Despite this diversity, ASD patients manifest with similar behavioral and neuronal phenotypes, albeit with different severities. This commonality of neurological phenotype suggests that the genetic defects may act through a limited set of pathways. In this report, we employed shRNA knockdown of eight ASD-relevant genes in neuronal culture to explore the downstream effects and identify common pathways or transcriptional signatures. Following microarray analysis of all samples, we performed cluster analysis on the intensity values. As expected, samples clustered by treatment group, demonstrating an overall consistency and quality of the knockdown experiment and subsequent gene expression quantitation. It also illustrates the distinctiveness of the downstream expression effects of knockdown of individual genes. Knockdown of Pten and Mecp2 had the most dramatic effects on gene expression. Given Pten's broad role in numerous cellular processes and Mecp2's role as a transcription factor, these results were not unexpected.
For example, mutations in Pten have been linked not only to ASD but also cancer and diabetes [53,54]. Fragile X mental retardation protein (FMRP), the protein product of Fmr1, has been shown to interact with a larger number of target proteins in relation to dendritic control of translation. A list of FMRP target proteins showed significant enrichment in the transcriptional profiles of shRNA for not only Fmr1, but also Mecp2, Pten, Shank3, Nlgn1 and Nlgn3 [25]. We further compared the genes affected in one or more knockdown experiments to a list of ASD interactome genes [20]. This comparison indicated that knockdown of the eight ASD genes resulted in changes to a significant number of ASD interactome genes and the genes affected by the luciferase shRNA condition had little overlap with the ASD genes (Table 3). This control comparison is important, as other groups have reported nonspecific adverse effects of other shRNA and siRNA constructs [55,56]. The luciferase shRNA versus untransduced comparison yielded almost 1,000 differentially expressed transcripts, with an impact on BDNF measured at the protein level. Thus, by identifying the changes in the luciferase shRNA versus untransduced experiments and subtracting those, the subsequent pathway analyses could focus on pathways that were specifically targeted by knockdown of the ASD-relevant genes and not identify artifacts of the transduction. We next analyzed the gene lists from the shRNA experiments by two pathway analysis approaches to obtain different perspectives on the data. The most prominent pathways revealed through analysis with NextBio were a number of pathways related to neurologic signaling and function (Table 4). Secondarily, NextBio indicated that several pathways involved general cellular metabolism and growth were also affected. One prominent pathway, the Peroxisome proliferator-activated receptor gamma coactivator 1-A (PGC1A) pathway, is based on the MSigDB's version of BioCarta's pathway and contains Mef2A and several calcium-dependent kinases, which show gene expression changes in all shRNA experiments. One aspect of the NextBio analysis is that directionality of change (that is, upregulated or downregulated) is reported. The majority of the pathways are downregulated with ASD shRNA knockdown, suggesting that the genes we chose for this work are needed for the expression of these pathways and thereby their activity. Pathway analysis with GenSensor also identified a number of pathways related to neuronal signaling and function ( Table 5). As with the NextBio analysis, several growth and metabolism pathways were also affected. During an examination of the individual pathways identified by the two pathway analysis methods, we noted a recurring involvement of the mitogen-activated protein kinase kinase (MEK)/ERK signaling pathway. These effects would occur either directly through a kinase signaling cascade (downstream of BDNF/TRK) or via cAMP (as in the case of the dopamine and serotonin G-protein coupled receptors). To further investigate this potential commonality, we employed CRE analysis to identify potential underlying mechanisms (CRE hypotheses) in the shRNA datasets. Unlike pathway analysis, which identifies pathways with altered gene expression, CRE analysis predicts potential mechanisms behind gene changes based on the concordance of the number of genes that change expression, and the directionality of that change [47]. 
The results of CRE analysis are interlinked hypotheses of potential driving mechanisms or experimental treatments that exhibit similar gene changes. It is interesting that three of the eight most conserved hypotheses have a biological function suggestive of growth and/or immune function, suggesting similar driving mechanisms (Figure 3). Likewise, there are highly conserved hypotheses involved with neurogenesis, synaptic activity and differentiation, as expected, although not mutually exclusive. Choosing the Mef2d experiment as a representative of the six most conserved shRNA treatments, the top-ranking clusters can be connected as a molecular interaction map (Figure 4), with cyclic AMP and ERK serving as dual hubs of the network directly connecting seven and eight related hypotheses, respectively. Three of the experiments, Mecp2, Mef2d, and Nlgn1, shared 11 of the 12 hypotheses in the network. As more experiments are included, the shared number of hypotheses decreases; for example, the light blue grouping of six experiments including Shank3 is based on four hypotheses. In addition to this work, other work directly or indirectly supports a role for ERK signaling in the development of ASD. For example, maternal use of one of several different classes of drugs relevant to ERK modulation has been reported to increase the risk of having children born with ASD [57]. Cocaine use during pregnancy has been reported to increase the rate of autism by 11%. Cocaine use has also been shown to alter dopamine-induced phosphorylation of ERK via cAMP [58,59]. Recently, Hoffmann et al. showed that chronic cocaine use in rats can lead to attenuated ERK signaling [60]. Chronic maternal cocaine use might thereby attenuate ERK signaling in the fetus. Similarly, mothers taking valproic acid, an inhibitor of gamma-aminobutyric acid (GABA) function, have been demonstrated to have an increased risk of having children with autism [61]. As with cocaine, valproic acid activates ERK signaling [62]. Zou et al. demonstrated that RAS/RAF/ERK1/2 signaling was upregulated in the brains of the BTBR mouse model of autism [63]. Recently, the upregulation of this pathway (and of ERK5) has been shown to occur in the brains of autistic subjects [64]. Although misregulation of ERK does appear to be a common feature of ASD, the observed directionality of that misregulation has been contradictory. In the case of Rett syndrome, ERK signaling through the BDNF pathway in particular is reduced. BDNF levels are reduced in Mecp2-null mice, and exogenous BDNF has been shown to rescue deficits due to Mecp2 deficiency [7,9]. In human Rett syndrome patients, a Val/Met polymorphism in BDNF has been associated with disease severity [18]. In the present study, Mecp2 shRNA produced a significant reduction in Bdnf at both the mRNA and protein level, both of which were inversely affected by Fmr1 knockdown. Given the diverse functions of BDNF in neurons, it would be interesting to determine in follow-up studies whether inverse functional outcomes may be observed with these treatments. Based on the experimental data presented here and previously existing data, we have put together a pathway model to show that the transcriptional regulation exerted by a diverse set of ASD-associated genes converges on ERK signaling. A central role for ERK signaling would explain many of the features associated with ASD.
Early work on the ERK proteins described them as microtubule-associated protein 2 (MAP2) kinases, shown to phosphorylate MAP2, a protein known to be involved in neuronal architecture [65][66][67]. Later work demonstrated that ERK plays a critical role in microtubule formation and thereby in axon/dendrite formation [68,69]. A review article by Hoogenraad and Akhmanova has summarized the criticality of microtubules in synaptic plasticity [70]. Mutations that lead to altered ERK activity would then be expected to cause alterations in axon extension and/or retraction and, thereby, in synaptic plasticity. Mazzucchelli et al. found that ERK1-knockout mice exhibit enhanced synaptic plasticity, most likely through the compensatory activation of ERK2 [71]. Voineagu et al. recently reported that the expression differences between the temporal and frontal lobes are significantly attenuated in individuals with autism [22]. They further suggested that this lack of differentiation is the mechanism behind the lack of long-range axonal connections and the decreased myelin thickness in autistic prefrontal lobes as reported by Zikopoulos and Barbas [21,22]. In some instances, altered ERK activity could interfere with neuroglial wrapping of neurites to form the myelin sheath. Newbern et al. recently reported that ablation of ERK1/2 in Schwann cell precursors resulted in hypomyelination of axons [72]. Conclusions A large number of genetic mutations and CNVs have been linked to ASD. The implicated genes span a variety of functions and pathways [25,35]. Despite this diversity, defects in neuronal plasticity and dendrite morphology are commonly associated with this disease. In this report, we utilized shRNA knockdown of eight ASD-associated genes to examine downstream transcriptional alterations and to look for pathway-level commonalities. An underlying assumption is that dysregulation of these genes in primary mouse cortical neurons produces transcriptional alterations robust enough to be detected in lysates of these mixed cultures. As it is difficult in such an experiment to identify a single causal gene, analyzing changes at the pathway level mitigates the reliance on just one or two genes. Pathway analysis by two different approaches identified alterations in a number of conserved neuronal signaling pathways. Detailed examination of those pathways emphasized alterations to the cAMP and ERK signaling pathways. These pathways would be good starting points for further functional characterization of common downstream neuronal phenotypes following knockdown of ASD-associated genes. For example, cAMP reporter assays and phosphoproteomic analysis of ERK pathway regulation would be informative in searching for common intervention points that might reverse the phenotypes caused by the ASD gene disruption. The prospect that multiple genes tied to a single disorder converge on a common set of pathways provides hope that therapeutics can be developed that will be efficacious in a patient population with a heterogeneous genetic background. Additional files Additional file 1: Figure S1. Western blot analysis of protein knockdown. Additional file 2: Table S1. Full gene expression data. Additional file 3: Table S2. Comparison of individual gene expression changes between knockdown datasets. Genes that had fold-changes >1.5 versus control with a P-value <0.5 were compared between experiments for overlap. The number of genes found in both knockdown experiments for any combination is shown on the table.
Additional file 4: Table S3. Gene set enrichment analysis utilizing Ingenuity pathways. Additional file 5: Table S4. A comparison of common causal reasoning engine (CRE) hypotheses from the nine experiments. A total of 269 hypotheses were observed in at least three of the experiments and not in the blank. The notation (+) indicates the hypothesis is predicted to be upregulated, and (−) indicates it is predicted to be downregulated. Names highlighted in orange are part of the molecular interaction network (Figure 4). An observed hypothesis (indicated by a red box) satisfies the following filters: correctness P-value <0.05, enrichment P-value <0.05, minimum number of correctly explained gene expression changes ≥3, percent correctly explained gene expression changes ≥60%, ranking score <100. Competing interests All authors were employees of Pfizer Global Research and Development, which funded this study, at the time the experimental work was conducted. Authors' contributions EG carried out the knockdown experiments, including the qPCR analysis and the preparation of samples for transcriptomic analysis. TAL, MMG, and JEF conducted the bioinformatic analysis of the transcriptomic data. TAL, LWF, and DTS conceived of the original project and experimental design. MTP coordinated data analysis and the preparation of the manuscript. EG, TAL, MMG, JEF, and MTP contributed to drafting of the manuscript. All authors read and approved the final manuscript.
9,557.8
2013-11-15T00:00:00.000
[ "Biology", "Psychology" ]
Lipid Raft, Regulator of Plasmodesmal Callose Homeostasis The specialized plasma membrane microdomains known as lipid rafts are enriched by sterols and sphingolipids. Lipid rafts facilitate cellular signal transduction by controlling the assembly of signaling molecules and membrane protein trafficking. Another specialized compartment of plant cells, the plasmodesmata (PD), which regulates the symplasmic intercellular movement of certain molecules between adjacent cells, also contains a phospholipid bilayer membrane. The dynamic permeability of plasmodesmata (PDs) is highly controlled by plasmodesmata callose (PDC), which is synthesized by CALLOSE SYNTHASES (CalS) and degraded by β-1,3-GLUCANASES (BGs). In recent studies, remarkable observations regarding the correlation between lipid raft formation and symplasmic intracellular trafficking have been reported, and the PDC has been suggested to be the regulator of the size exclusion limit of PDs. It has been suggested that the alteration of lipid raft substances impairs PDC homeostasis, subsequently affecting PD functions. In this review, we discuss the substantial role of membrane lipid rafts in PDC homeostasis and provide avenues for understanding the fundamental behavior of the lipid raft–processed PDC. Lipid Raft Components The plasma membrane is a biological compartment shielding the contents and substances of the entire cell. The plasma membrane allows the formation of a complex intracellular organization, enabling cellular activities to be substantially regulated. Therefore, its unique structure plays a critical role in biological processes including the transduction of various signals and serves as a selectively permeable barrier to prevent the unrestricted exchange of molecules from one side to the other [1]. Furthermore, this compartment, which encircles the cell, mediates intercellular interactions for the exchange of materials and information between cells [2]. A major feature of the plasma membrane is the lipid bilayer, whose function in living organisms has been determined in detail, allowing for the emergence of fundamental concepts concerning the function of the cellular membrane [3,4]. Some domains of the plasma membrane phospholipid bilayer are enriched in specific lipids that reside in the plane of the membrane, and these domains are termed lipid rafts (Figures 1A and 2A) [3,5,6]. These specialized domains are defined as dynamic and small (10 to 200 nm) plasma membrane domains that are enriched in sterol and typical phytosphingolipids, such as glycosylinositolphosphoceramides, and that contain a low amount of unsaturated phospholipids [5,[7][8][9][10][11]. These substances are packed together to form a highly ordered structure distinct from the surrounding disordered area to induce lateral heterogeneity, enabling important biological functions related to membrane signaling and the stabilization of protein-protein, protein-lipid and lipid-lipid interactions [7,8,10]. The presence of lipid rafts in plants was initially identified by analyzing a low-density, Triton X-100-insoluble fraction isolated from tobacco (Nicotiana tabacum), which exhibited a protein composition obviously different from that of the general plasma membrane (PM), with an excess of signaling proteins such as heterotrimeric G-proteins and glycosylphosphatidylinositol (GPI)-anchored proteins [12][13][14].
In addition, lipid raft analyses have also been conducted in Arabidopsis thaliana along with the characterization of particular lipid compositions from detergent-insoluble membranes (DIMs), providing a new understanding of plant lipid rafts [14,15]. Figure 1. Hypothetical model of how ceramides could control PD permeability. Sterol and sphingolipid biosynthesis begins at the endoplasmic reticulum (ER), and these molecules are subsequently transported to the plasma membrane by the vesicle-mediated exocytosis pathway to form lipid rafts, preferentially at plasmodesma (PD) membranes. Eventually, PD membranes are enriched by lipid raft formation (A). The excessive ceramide in acd5 and erh1 mutant plants enables the salicylic acid (SA)-mediated upregulation of PD LOCATED PROTEIN (PDLP) and CALLOSE SYNTHASE1 (CalS1) transcript levels to induce plasmodesmata callose (PDC) accumulation (B); SA-mediated PDLP and CalS1 activation can also be directly upregulated during infection with a biotrophic pathogen such as powdery mildew (B). Blue arrows, trafficking; black arrows, signaling; question mark, not enough evidence that explains how the lipid raft-enriched vesicle controls plasmodesmata callose directly. Some membrane proteins, such as glycosylphosphatidylinositol-anchored proteins and acylated cytosolic proteins, show a preferential association with lipid rafts, thereby facilitating various biological functions and dynamic processes, including membrane trafficking, protein sorting, cell polarity and signal transduction [16][17][18]. The plasma membrane has exofacial and cytofacial leaflets, which cause the phospholipid bilayer to differ in electrical charge, fluidity and the activation of certain proteins. Exofacial proteins anchored by glycophosphatidylinositol (GPI) anchors preferentially localize into lipid rafts, whereas cytofacial proteins are modified by saturated fatty acids such as palmitoyl or myristoyl groups [17,[19][20][21][22]. Fluorescence microscopy has allowed for the observation of phase separation in lipid membranes [23], and fluorescently labeled lipids or lipophilic dyes have been commonly used as imaging agents [24].
Figure 2. Localization of GPI-anchored plasmodesmata (PD) proteins is controlled by lipid rafts. GPI-anchored PD proteins such as plasmodesmata callose binding (PDCB) proteins and plasmodesmal-localized β-1,3-glucanases (PDBGs) are synthesized in the endoplasmic reticulum (ER). These two proteins may require lipid raft-enriched vesicle-mediated exocytosis machinery to reach both the PD plasma membrane and the cellular plasma membrane as their target locations (A). An excessive sterol amount is able to induce the lipid raft-enriched vesicle-mediated exocytosis of PDCBs and PDBGs to regulate symplastic nanochannels by governing plasmodesmata callose (PDC) accumulation (A). The disruption of the sterol biosynthesis pathway with fenpropimorph or lovastatin affects the transport system of GPI-anchored PD proteins, preventing the proper localization of these two proteins (B). As mentioned above, plant lipid rafts possess enriched sterol and phytosphingolipid molecules, and therefore the action of plant lipid rafts is highly influenced by sterol and sphingolipid biosynthesis, as the production of these molecules affects plant lipid raft organization and behavior (Figure 1A).
In mammalian cell experiments, cholesterol has been suggested to be an important lipid raft component and has been shown to play a critical role in lipid raft stability and organization [25]. In particular, the depletion or perturbation of cell membrane-associated cholesterol diminishes the functions of intact lipid raft-associated membrane components [18,25]. Sterol molecules vary in prokaryotic and eukaryotic cells [26]. In Arabidopsis thaliana, the most abundant sterols are cholesterol, sitosterol, stigmasterol and campesterol, and among these, sitosterol is the most abundant phytosterol [27]. Sphingolipids are described as ubiquitous components of cellular membranes and are composed of a long-chain sphingoid base with one amide-linked fatty acyl chain and a polar head group, and sphingosine is the most prominent long-chain sphingoid base [28,29]. Sphingomyelin is the major phosphosphingolipid found in animal cells; however, it is not detected in plant cells. Intriguingly, glycosylinositolphosphorylceramides (GIPCs) are the most abundant in plant cells, but these have disappeared in animal cells [29]. The Action of Plasmodesmata Callose Plasmodesmata (PDs) are dynamic symplasmic nanochannels that are localized in the plant cell wall and connect the cytoplasm spaces and endoplasmic reticulum compartments of adjacent cells [30][31][32][33][34]. PDs mediate the symplasmic movement of small molecules such as water, ions, small nucleotides, phytohormones and other solutes (amino acids and sugar). Relatively larger molecules, including peptides and small proteins, are also able to be symplasmically moved through PDs [34,35]. In addition, it has been reported that PDs facilitate the cell-to-cell trafficking of homeodomain transcription factors (TFs) and other proteins through an actively regulated process [31,36,37]. Indeed, the active cell-to-cell communication machinery presumably involves protein-protein interactions at PDs to regulate the size exclusion limit (SEL). The PD SEL can be described as the size of the largest molecules that are able to pass through the PD [38]. The size of the PD SEL can be controlled by callose deposition in the neck region of the PD aperture [39][40][41]; therefore, a lack of plasmodesmata callose (PDC) presumably enhances the movement of molecules through PD channels [42]. Callose is a polysaccharide that is produced by callose synthase and degraded by β-1,3-glucanases. Callose is widely found in higher plants, in which it is a component of specialized cell walls or cell wall-associated structures at particular stages of growth and differentiation. Callose plays diverse roles during plant growth and development in order to ensure proper growth. In addition, it is particularly involved in the plant responses to both biotic and abiotic environmental stresses [35,[43][44][45]. Furthermore, biochemical and biological studies of callose have shown that callose plays important roles in plant developmental processes, including cell division, organogenesis, microsporogenesis, pollen germination and tube growth, fertilization, embryogenesis, fruit ripening, seed germination, mobilization of storage reserves in the endosperm of cereal grains, bud dormancy, and responses to wounding, cold, ozone and the other stresses [46][47][48][49][50][51][52][53][54][55]. Twelve GLUCAN SYNTHASE-LIKE (GSL) genes (also known as CALLOSE SYNTHASE (CalS)) have been identified and characterized in Arabidopsis thaliana. 
Among the 12 callose synthases identified in Arabidopsis, five (CalS10/GSL8, CalS7/GSL7, CalS3/GSL12, CalS1/GSL6 and CalS8/GSL4) have been found to play a direct role in PDC deposition. Correspondingly, previous studies have suggested that CalS10/GSL8 strongly regulates PDC deposition to maintain cell-to-cell permeability in plants [56][57][58], and gsl8 homozygote mutants are seedling-lethal [46,59]. Another callose synthase gene, CalS7/GSL7, is responsible for PDC in the phloem of vascular tissue. The observation of callose deposition at sieve elements in the cals7 mutant reveals that the reduction of PDC deposition during sieve element development and maturation results in a disordered sieve element pattern [58]. Recent studies on the regulation of PDs have also implicated the CalS3/GSL12 gene; CalS3 participates in PDC formation specifically at the stem cell niche and stele [40]. Alterations in vascular patterning are caused by two allelic gain-of-function mutations, cals3-1d and cals3-2d. The overproduction of PDC in the cals3-d mutant impairs molecular trafficking through PDs, especially during root development [60]. Moreover, the disruption of cell-to-cell communication at PDs has also been shown by icals3m, an inducible vector that enables the overexpression of cals3. Loss of symplasmic signaling in the activation of icals3m expression significantly affects root development and the gravitropism response [61]. Since PDC production is involved in the response to some environmental stresses, callose synthase genes controlled by environmental stresses were screened and identified. Two genes, CalS1/GSL6 and CalS8/GSL4, have different roles in responding to certain stresses. CalS1 is necessary to modulate PDC deposition during pathogen infection by activating the salicylic acid pathway, whereas CalS8 is tightly associated with the reactive oxygen species (ROS) produced during wounding to control PDC deposition [62]. Polar auxin transport (PAT) is an active process that controls the distribution of the hormone auxin in plants. PAT is an essential process for forming and maintaining the auxin gradient during the phototropic response. However, effective auxin gradient formation also requires a tight cooperation of PDs. Remarkably, excessive PDC accumulation promotes asymmetric auxin distribution in plants, and eventually the side that has more auxin elongates faster than the other side [56,[63][64][65]. A recent study has shown that the asymmetric distribution of auxin during tropism is perturbed in dsGSL8 RNAi plants, and the seedlings are eventually defective in either phototropism or gravitropism, mainly due to PDC depletion [56]. Sphingolipid and Sterol Biosynthesis Pathway Involved in Plasmodesmata Callose Maintenance As described above, sphingolipids along with sterols are enriched in detergent-resistant membranes termed lipid rafts and have been linked to certain biological processes and cellular activities, including the sorting and trafficking of specific plasma membrane proteins, and possibly function as signaling molecules to initiate programmed cell death in plants. Moreover, phytosphingosine-1-phosphate has been identified to mediate abscisic acid-dependent guard cell closure by transduction through the unique prototypical G-protein α-subunit GPA1 [28]. 
Recently, the disruption of a sphingolipid ceramide kinase gene has also been characterized as the basis for the enhanced rate of apoptosis in the Arabidopsis thaliana accelerated cell death5 (acd5) mutant, suggesting that levels of ceramides in sphingolipids control programmed cell death in plants. Correspondingly, in response to Botrytis cinerea infection, an acd5 mutant exhibited more severe disease symptoms, smaller papillae, decreased callose deposition and increased apoplastic and mitochondrial ROS compared to the wild-type plants [66]. Moreover, excessive ceramide accumulation in an acer-1 mutant resulted in increased plant susceptibility to Pseudomonas syringae and greater sensitivity to salt stress than in the wild-type plant [67]. Arabidopsis possesses three genes encoding ceramide synthases with distinct substrate specificities; the LONGEVITY ASSURANCE GENE ONE HOMOLOG1 (LOH1; At3g25540)- and LOH3 (At1g19260)-encoded ceramide synthases use very-long-chain fatty acyl-CoA and trihydroxy long-chain base (LCB) substrates, and the LOH2 (At3g19260)-encoded ceramide synthase uses palmitoyl-CoA and dihydroxy LCB substrates [68]. However, LOH2 overexpression resulted in an excessive amount of sphingolipids with C16 fatty acid and dihydroxy LCB ceramides, followed by programmed cell death symptoms and induced salicylic acid (SA) accumulation [69]. In contrast, the enhancing rpw8-mediated hypersensitive response-like cell death 1 (erh1) mutant, exhibiting high ceramide accumulation and the loss of inositol phosphorylceramide synthase (IPCS) activity, showed enhanced RPW8-mediated hypersensitive response-like cell death and resistance to the biotrophic pathogen powdery mildew, which is associated with ceramide accumulation and possibly involves PDC accumulation (Figure 1B). Correspondingly, in the acd5 mutant, ceramide accumulation also induces programmed cell death and resistance against powdery mildew infection. It seems likely that the excessive ceramide amount in the acd5 and erh1 mutants confers resistance against biotrophic pathogens (powdery mildew) rather than hemibiotrophic pathogens such as Botrytis cinerea and P. syringae (Figure 1B) [70]. Since PD permeability is dynamically regulated by PDC, several studies have been conducted to identify proteins that influence PDC turnover and its stabilization at the PD aperture. Furthermore, lipidomic analyses of purified PDs have revealed that PD membranes contain phospholipids with an excessive amount of saturated lipids compared to the plasma membrane and that GIPCs are predominantly found in PD membranes [6,71]. Endogenous alteration of sterol components can be accomplished by nutritional manipulations in sterol auxotrophic species, such as plants. On the other hand, drug treatment can also be applied to alter sterol biosynthesis profiles. There are also sterol synthesis inhibitors for various steps of sterol biosynthesis (Figure 2B). Microdomain lipid profiling has indicated that sterol biosynthesis inhibitors affect PDC homeostasis. In Arabidopsis thaliana, the presence of fenpropimorph, which inhibits fecosterol-to-episterol conversion [72], resulted in increased PDC accumulation after 24 h of treatment. Intriguingly, two GPI-anchored plasmodesmata proteins (GPI-APPs), PDCB1 and PDBG2, were also mislocalized in response to fenpropimorph and lovastatin (Figure 2B) [6].
Alteration of Sphingolipid Homeostasis Controls PDLP5 (PLASMODESMATA-LOCATED PROTEIN 5) Expression through Salicylic Acid (SA)-Dependent Pathway PD-associated proteins have been experimentally described to participate in PD function, so it is not surprising that proteins involved in PDC turnover also localize to PDs and are often involved in PD regulation. In addition, several studies have implicated many different cellular processes in the alteration of ceramide and sphingolipid metabolism, and these processes are associated with plant defense pathways [66,67,73] and include salicylic acid (SA) machinery [70] as well as sphingolipid homeostasis [69,74]. For example, a cell death phenotype in acd5 and erh1 is due to an SA-dependent pathway. Abolishing the ACD5 and ERH1 gene functions results in disproportionate ceramide to ceramide-1-phosphate and ceramide to inositol phosphorylceramide (IPC) ratios, respectively, causing a high ceramide concentration [66,70]. Subsequently, this high ceramide concentration initiates SA-mediated programmed cell death by upregulating PDLP5 to protect against biotrophic pathogens (Figure 1B). Previous studies have shown that PLASMODESMATA-LOCATED PROTEINS (PDLPs) are partially associated with PD channels [75][76][77][78]. In Arabidopsis thaliana, there are eight members of the PDLP family, and these members contain two extracellular DUF26 domains in the N-terminus, accompanied by a transmembrane domain (TMD) and short cytoplasmic tail in the C-terminus [78]. PDLP5 acts as the molecular link between PD function and the initiation of SA-induced programmed cell death. It has been demonstrated that PDLP5 is upregulated in response to SA and that PDLP5 also controls PDC deposition to close PDs in response to SA [76]. Subsequently, the loss of PDLP5 activity results in an enhanced PD permeability phenotype and increased susceptibility to bacterial infection [79]. Moreover, a recent study on how SA-mediated PDLP5 activation regulates the plant immune system showed that CalS1 appears to be a key component in SA-dependent PD regulation. In the presence of SA, the transcript levels of PDLP5 and CalS1 are highly upregulated, whereas the transcript levels of the other CalS genes are not significantly enhanced. SA-mediated PDLP5 activation is required to close PDs by regulating PDC during pathogen infection, and the cals1-1 mutant failed to increase PDC accumulation or change PD permeability when this plant was treated with SA and P. syringae. This result indicates that CalS1 and PDLP5 are strongly associated with PD regulation to control PDC accumulation during SA-mediated immune responses (Figure 1B) [62]. Plasmodesmal Localization of GPI-Anchored Plasmodesmata Proteins is Regulated by Lipid Rafts Another specific PD-associated protein, PLASMODESMATA CALLOSE BINDING1 (PDCB1), contains a CBM43 functional domain that facilitates callose binding activity, and this protein is therefore located at sites of callose deposition. Recently, PDCB1 was shown to cosegregate with the sphingolipid- and sterol-rich microdomains [6]. Based on a structural domain analysis, the PDCB1 protein contains an X8 domain that is responsible for its callose binding activity and a glycophosphatidylinositol (GPI) anchor sequence at the C-terminus, so it should be noted that this protein is preferentially localized in lipid rafts (Figure 2A) [71,80].
On the other hand, one class of cellular factors that is responsible for callose turnover is the 1,3-β-D-glucanases (BGs), which contain a GH17 domain to specifically recognize callose. Among the 50 BGs that have been characterized in Arabidopsis, some members, including AtBG_papp and PDBG2, possess a predicted C-terminal GPI-anchor attachment motif for targeting to the membrane. During targeting and posttranslational modification, the GPI anchor attachment site is cleaved, and the mature GPI is attached, which is necessary for target localization [6,39]. Recently, using a comparative analysis of PD targeting of PD and non-PD GPI-anchored proteins, Zavaliev et al. (2016) showed that GPI modification is necessary and sufficient for PD targeting of both AtBG_papp and PDCB1 [81]. As mentioned above, sterol biosynthesis disruption affects the targeting of the GPI-anchored proteins PDCB1 and PDBG2 to primary PDs and their modulation of PDC accumulation (Figure 2B), due to defects in lipid raft formation [6]. Conclusions The investigations of lipid rafts have involved diverse biological concepts, especially in plant cells. This unique membrane domain has been described in detail by research groups to provide an understanding of the roles of plant lipid rafts. A recent study on plant lipid rafts has focused on plasmodesmata callose (PDC) accumulation in the control of symplasmic channels [6]. Lipid raft-modulated PDC accumulation is dependent on sterol and sphingolipid homeostasis, as these two components are used to form lipid rafts. Furthermore, sterol depletion results in the mislocalization of two specific GPI-anchored PD proteins, PDCBs and PDBGs, which regulate PDC deposition and degrade PDC, respectively. The mislocalization of these two GPI-anchored PD proteins results in an excessive amount of PDC deposition. Additionally, disruption of lipid rafts can affect the targeting of other lipid raft-enriched proteins such as Grain setting defect1 (GSD1), Remorins (REMs) and StRemorin1.3, which modulate PDC levels [82][83][84]. It was shown that GSD1 regulates PD conductance by interacting with ACTIN in association with PDCB [84]. In addition, the existence of sphingolipids in the lipid raft is also required to maintain the equilibrium of certain signaling machineries in plant cell systems. A direct link between the localization of PDCBs and PDBGs and sphingolipids has not yet been elucidated. However, a recent study concerning sphingolipid function in callose turnover has demonstrated the involvement of the salicylic acid (SA) pathway [66]. Alterations to sphingolipids, such as the modulation of ceramide production, enable the activation of SA-upregulated PDLP5 and result in PD closure by increasing PDC deposition as a defense system against powdery mildew infection. However, further study is required to fully understand the mechanism by which sphingolipids control PDC homeostasis. Overall, these insights could be used to develop new hypotheses for studies on the role of plant lipid rafts in PDC turnover and PD regulation.
5,244.4
2017-04-03T00:00:00.000
[ "Biology", "Environmental Science" ]
Rotation of the Stress Tensor in a Westerly Granite Sample During the Triaxial Compression Test We performed spatiotemporal modelling of the 3D stress and strain distributions during a triaxial compression laboratory test on a westerly granite sample using finite-difference numerical modelling implemented with FLAC3D software. The modelling was performed using a ubiquitous joint constitutive law with strain softening. The applied procedure is capable of reproducing the macroscopic stress and strain evolution in the sample during triaxial deformation until a failure process occurs. In addition, we calculated focal mechanisms of acoustic emission (AE) events and resolved local stress field orientations. This detailed stress information was compared with that from numerical modelling. The comparison was made based on the 3D rotation angle between the cardinal axes of the two stress tensors. To infer the differences in rotation, we applied ANOVA. We identified the two time levels as the plastic deformation phase and the after-failure phase. Additionally, we introduced the bin factor, which describes the location of the rotation scores in the rock sample. The p values of the test statistics F for the bin and phase effects are statistically significant. However, the interaction between them is insignificant. We can, therefore, conclude that there was a significant difference over time between the rotation means in the particular bins, and we ran post hoc tests to obtain more information on where the differences between the groups lie. The largest rotation of the stress field derived from the focal mechanisms of AE events relative to the numerically calculated stress field is observed in the edge bins, which do not frame the damage zone of the sample. Several approaches have been proposed to invert earthquake focal mechanisms for stress orientation (Maury et al. 2013), but the most popular are those developed by Michael (1984), Gephart and Forsyth (1984) and Angelier (2002), with extensions proposed by Lund and Slunga (1999), Hardebeck and Michael (2006), Arnold and Townend (2007), Maury et al. (2013), Vavrycuk (2015) and others. Usually, the more widely used methods obtain similar results for similar data sets. However, the stress inversion results are sensitive to the number of earthquakes, focal mechanism uncertainties, and fault plane variability (e.g., Hardebeck and Hauksson 2001; Bohnhoff et al. 2004; Vavrycuk 2015). To guarantee the high resolution of the stress inversion results, a high number of seismic events with well-constrained focal mechanisms is required in a spatially determined area over a certain time period. These conditions are well satisfied by laboratory experiments, which provide a convenient framework to test the performance of stress inversion techniques in determining the details of stress field variations during loading. Triaxial compression (TC) tests on rock samples performed to analyse rupture mechanics are frequently accompanied by monitoring of acoustic emission (AE) activity. The analysis of AE activity allows the spatiotemporal evolution of damage in the sample to be tracked and provides detailed information on fracturing and frictional processes in rock samples subjected to loading (Stanchits et al. 2006). The seismic moment tensor (MT) can be inverted from AE data and then decomposed to describe volumetric (ISO), double-couple (shear, DC) and compensated linear vector dipole (CLVD) strain components.
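To illustrate the ANOVA design described above (rotation angle modelled with bin, phase and their interaction), a minimal Python sketch is given below. The bin labels, sample sizes and angles are synthetic placeholders, not the experiment's data.

```python
# Minimal sketch of a two-way ANOVA with interaction on synthetic rotation angles.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
bins = np.repeat(["edge", "centre", "notch"], 40)                  # hypothetical bin labels
phases = np.tile(np.repeat(["plastic", "after_failure"], 20), 3)   # two time levels
angles = rng.normal(25.0, 8.0, size=bins.size) + (bins == "edge") * 10.0

df = pd.DataFrame({"angle": angles, "bin": bins, "phase": phases})
model = smf.ols("angle ~ C(bin) * C(phase)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # F statistics and p values for bin, phase, bin:phase
```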
This helps improve the understanding of physical processes taking place within seismic sources and the sample, such as rupture dynamics, fault complexity or the radiation of the seismic energy related to the damage accumulation (Ben-Zion and Ampuero 2009; Castro and Ben-Zion 2013). Based on the MT solutions, the focal mechanisms of AE events are estimated. The stress inversion method provides information about the directions of the three principal stress axes and a measure of the size of the intermediate principal stress, σ2, relative to the maximum, σ1, and minimum, σ3, principal stresses, called the stress ratio R. The stress ratio is very often used to determine temporal local rotations of the stress tensor and to determine the potential processes responsible for these variations. Systematic temporal stress rotations have been observed in reservoirs in relation to fluid injections (Martínez-Garzón et al. 2013, 2014a; Schoenball et al. 2014). These stress variations appear as a response to the pore pressure changes and the decrease in in situ temperatures caused by the cold fluid (Jeanne et al. 2015; Yoon 2015). More recently, stress rotation has been identified in subduction zones in relation to slow slip events (Warren-Smith et al. 2019). These stress changes were interpreted as the accumulation and release of fluid pressure within the subducting oceanic crust, impacting the timing of slow slip event occurrence. Spatiotemporal local stress tensor rotations were also identified before and after large tectonic earthquakes (Hardebeck and Hauksson 2001; Hardebeck 2012; Ickrath 2015), indicating that earthquakes are capable of causing significant stress partitioning along their rupture (e.g., Bohnhoff et al. 2006). Local rotations of the stress field at a fault are very difficult to detect. They are of a slightly higher order than the error of the stress field estimation. Theoretically, stress axes may rotate up to 45° due to an earthquake (Hardebeck and Hauksson 2001), and temporal rotations of about 20° have been observed (e.g., for the Tohoku earthquake; Hasegawa et al. 2011). Different spatial zones may have completely different stress states, so even larger spatial rotations are possible. Therefore, it is important to recognize whether the possible range of stress axis rotations along the fault is statistically significant. Here, we exploit laboratory tests (Kwiatek et al. 2014) to characterize the evolution of the stress field along the rupture in a sample during the whole laboratory experiment. We develop a numerical modelling method to reproduce the evolution of stress and strain changes in the classical triaxial fracture experiment performed on westerly granite (WG) samples. We did not intend to reproduce the global stress and strain evolution in the sample but rather to reproduce the peculiarities of the local stress and strain field. Numerical modelling delivers a more detailed picture of the spatiotemporal evolution of stress and strain in the rock sample than is typically available from the associated macroscopic measurements (axial load, axial displacement) or point measurements on the sample surface (strain meters). Then, we discuss the implications of our findings in the context of stress rotation monitoring, spatiotemporal stress partitioning along the rupture, and the capability of numerical modelling to capture these stress variabilities.
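As a concrete illustration of the quantities introduced above, the sketch below extracts the principal stress directions and the stress ratio R = (σ1 - σ2)/(σ1 - σ3) from a 3 x 3 stress tensor by eigendecomposition. The tensor values are illustrative (compression taken as positive), not results from the experiment or the model.

```python
# Minimal sketch: principal stresses, their orientations and the stress ratio R
# from an illustrative stress tensor (MPa, compression positive, axial load along z).
import numpy as np

stress = np.array([[ 75.0,  5.0,   0.0],
                   [  5.0, 75.0,   2.0],
                   [  0.0,  2.0, 574.0]])

eigvals, eigvecs = np.linalg.eigh(stress)
order = np.argsort(eigvals)[::-1]          # sigma1 >= sigma2 >= sigma3
s1, s2, s3 = eigvals[order]
axes = eigvecs[:, order]                   # columns: sigma1, sigma2, sigma3 directions

R = (s1 - s2) / (s1 - s3)
print("principal stresses (MPa):", np.round([s1, s2, s3], 1))
print("sigma1 direction:", np.round(axes[:, 0], 3))
print("stress ratio R =", round(R, 3))
```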
This work contributes to an improved understanding of the physical mechanics underlying the rupturing process and assesses the possibilities to monitor stress heterogeneities in the rupture region. The study aims to provide answers to the following questions: What happens to the stress field tensor during the experiment? When and which stress rotation pattern may be observed during the preparatory rupture process, the coseismic phase and the postseismic phase, what is the magnitude of this rotation, and is it statistically significant? Does numerical modelling give a reliable representation of local spatiotemporal changes in the stress field orientation? To answer these questions, the numerical modelling results of the spatial stress field orientation are compared with local stress field data inverted from seismic data. The AE-derived focal mechanisms (e.g., Kwiatek et al. 2014) are used to invert the local stress field orientation using stress tensor inversion algorithms (Martínez-Garzón et al. 2014a, b). The paper is organized as follows: first, we present the experimental procedure of the triaxial compression test, discuss the observed seismicity and then describe the MT and stress inversion procedure. Next, the numerical simulation based on Itasca FLAC3D, a three-dimensional finite difference method (FDM) software, is presented; this section involves the calibration of the experimental and synthetic stress-strain curves. Then, we compare local stress field orientations originating from stress tensor inversion of AE events with those derived in the model. To compare the observed and synthetic stress orientations, rotation angles (Kagan 2007) were used, and analysis of variance (ANOVA), a statistical tool, was used to quantify the differences. Experimental Procedure and Seismicity The triaxial test was performed on a cylindrical, intact sample of WG. The sample size was 40 × 107 mm, and it was loaded at a constant strain rate of 3 × 10⁻⁶ s⁻¹. The oven-dried sample was notched 2.5 cm deep at 30° to the cylinder axis. The laboratory experiment as well as the AE measurement procedure are described in detail in Kwiatek et al. (2014). The International Society for Rock Mechanics (ISRM) suggests that the specimen diameter should not be less than 54 mm for a cylindrical specimen. The authors would like to add that the other ISRM recommendations have been met: • the test specimens shall be right circular cylinders, • height-to-diameter ratio of 2.5-3.0 (here 2.68), • the diameter of the specimen should be related to the size of the largest grain in the rock by a ratio of at least 10:1 (largest grain around 300 μm; the grain size of Westerly granite ranges from about 0.05-2.2 mm (Moore et al. 1987), so even taking into consideration the largest value mentioned, the diameter of the analysed specimen is over 18 times larger). First, the sample was confined to 75 MPa. Then, deviatoric loading was applied at a constant rate corresponding to a strain rate of 3 × 10⁻⁶ s⁻¹ until the sample fractured (Fig. 1a). The generated fracture was complex at the top part and was composed of two major subsurfaces, as presented by the spatial distribution of the AE events (Fig. 1b). The displacements (strains) in the sample were monitored by two strain meters located directly on the sample. The sample failed at a maximum vertical stress equal to 574 MPa (Fig. 1a), when the strains in the sample reached a maximum value of 1.19 (Fig. 1a).
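As a quick numerical check of the ISRM geometry criteria quoted above, the short Python sketch below recomputes the height-to-diameter and diameter-to-grain-size ratios for the 40 × 107 mm specimen; the grain sizes are the values cited in the text.

```python
# Specimen geometry (mm), from the experiment description.
diameter_mm = 40.0
height_mm = 107.0

# Grain sizes (mm): nominal value cited in the text and the largest value from Moore et al. (1987).
grain_nominal_mm = 0.3
grain_max_mm = 2.2

print(f"height/diameter      = {height_mm / diameter_mm:.2f}  (ISRM suggests 2.5-3.0)")
print(f"diameter/grain (0.3) = {diameter_mm / grain_nominal_mm:.0f}:1  (ISRM suggests >= 10:1)")
print(f"diameter/grain (2.2) = {diameter_mm / grain_max_mm:.1f}:1")
```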
Acoustic Emission Monitoring and MT Inversion The AE activity was monitored during the experimental test with 16 AE sensors glued to the sample surface, ensuring almost full azimuthal coverage of the seismic events. AE monitoring resulted in 14,583 events detected and located within the sample (Figs. 1b and 2, blue bars). Figure 1b presents the hypocentres of the AE activity recorded during loading; all dots correspond to seismic events. Additionally, using a moving 50 s window, the seismic events that occurred at the maximum loading of 574 MPa are shown in yellow. This highlights the fault plane that appeared due to the applied loading. Additionally, for 6561 events, full MT inversion (FMTI) was performed (Fig. 2, light bars) following the procedure described in Kwiatek et al. (2014); for the FMTI, the first 14 P-wave amplitudes were used. (Fig. 2 caption: histogram of seismic activity during the triaxial compression test; dark bars, events with a hypocentre location; light bars, events with a hypocentre location and seismic moment tensor; stem plot, distribution of AE magnitudes with the occurrence time of the events.) The most seismically active period started just before sample failure. The largest AE events occurred close to the moment of fracture initiation (Fig. 2, stem plot). The blue background represents the period when the sample was in the post-failure phase after reaching the maximum compressive strength. The increasing number of seismic events was connected with the increasing compressive stress applied to the bottom plate of the rock sample, with the peak seismic activity occurring just after the sample failed. The most seismically populated volume is located close to the sample centre (Fig. 3, dark bars); the highest-magnitude events are also located centrally in the sample and in its upper half (Fig. 3, stem plot). The moment tensor can represent different seismic sources, and to identify its type, the MT is usually diagonalized and decomposed into elementary parts: deviatoric and isotropic. The isotropic part describes the volumetric strain component. By definition, a positive isotropic tensor is related to volumetric expansion. The deviatoric part can be additionally divided into double-couple and CLVD components; MT components are widely used in seismology for physical interpretations. Seismic source components have already been described in many papers (see, e.g., Vavryčuk 2015). The DC source is generally connected with shear faulting. However, a CLVD source has no verified geological meaning and is often interpreted as a source component representing the residual radiation from the best DC source (Dahm and Krueger 2014). In seismology, and especially in anthropogenically induced seismicity, seismic sources can have different MT compositions and can be complex (e.g., Orlecka-Sikora et al. 2014; Lizurek et al. 2015; Rudziński et al. 2016, 2017; Lasocki et al. 2017; Lasocki and Orlecka-Sikora 2020). The vast majority of the analysed AE events have a double-couple component of the source mechanism, indicating that shear slip occurs. These are fault-parallel AE events with dip directions in accordance with the macroscopic slip (Kwiatek et al. 2014). The authors also noticed that larger AEs contain fewer ISO components, whereas small events contain more ISO components. There are 3 times more DC events than CLVD events and almost 9 times more DC events than ISO events (Fig. 4).
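To make the ISO/DC/CLVD terminology concrete, here is a minimal Python sketch of one common moment tensor decomposition convention (a scaled decomposition in the spirit of Vavryčuk); it is an illustrative helper rather than the code used in the study, and other percentage conventions exist.

```python
import numpy as np

def decompose_mt(M: np.ndarray):
    """Split a 3x3 symmetric moment tensor into ISO, CLVD and DC fractions (%).

    One common convention: the isotropic part is tr(M)/3; epsilon is the ratio of the
    smallest- to largest-magnitude deviatoric eigenvalue (0 for a pure double couple).
    """
    m_iso = np.trace(M) / 3.0
    eig = np.linalg.eigvalsh(M - m_iso * np.eye(3))      # deviatoric eigenvalues
    eig_abs_sorted = eig[np.argsort(np.abs(eig))]        # smallest -> largest magnitude
    eps = -eig_abs_sorted[0] / abs(eig_abs_sorted[-1])
    c_iso = m_iso / (abs(m_iso) + abs(eig_abs_sorted[-1]))
    c_clvd = 2.0 * eps * (1.0 - abs(c_iso))
    c_dc = 1.0 - abs(c_iso) - abs(c_clvd)
    return 100 * c_iso, 100 * c_clvd, 100 * c_dc

# Example: a pure double-couple source gives ~0% ISO, ~0% CLVD, ~100% DC.
M_dc = np.array([[0.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0]])
print(decompose_mt(M_dc))
```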
Stress Tensor Inversion Stress tensors were inverted from the AE data using two inversion methods: MSATSI and BRTM. MSATSI (Martínez-Garzón et al. 2014a, b) provides a framework for calculating the deviatoric stress tensor together with its uncertainties using the bootstrap resampling method. The BRTM method uses the right tetrahedral method and a Bayesian approach for the determination of the stress field from focal mechanism datasets and provides a probability function over the focal sphere for both the σ1 and σ3 principal stress directions (Massa et al. 2016). The WG sample was divided into nine spatial bins in which stress inversion was performed from the available AE mechanisms (Fig. 5a). (Fig. 3 caption: histogram of the seismic activity (bars) and distribution of the AE magnitude (stem plot) as a function of the sample height.) We considered two phases of sample deformation under the triaxial compression test: the plastic and post-failure phases. The elastic phase was omitted due to the very limited number of seismic events during this phase. We consider the plastic phase as the period from the beginning of the experiment until the maximum stress σ1 (574 MPa) is reached. The post-failure phase is the consecutive one. Generally, at least 24-event windows were chosen for the stress inversion calculations. Only in one case, namely, bin 7 during the failure phase, was a 17-event window chosen, for the same reason as for the rejection of the elastic part. Such event windows ensure the largest number of cases (220 stress inversions) in which the reliability limit on the number of seismic events for stress inversion was fulfilled. The number of seismic events and stress inversions performed in the particular bins is presented in Table 1; italic cells correspond to the plastic phase of deformation of the WG sample, and the failure phase is highlighted in bold italics. The comparison of the trends and plunges of the maximum (σ1) and minimum (σ3) principal stresses obtained from stress inversion performed with the BRTM (green, σ1; yellow, σ3) and MSATSI (marked with triangles; dark, σ1; light, σ3) software in all nine bins (the bin indicator is marked in the right corner of each plot), for just one calculation window, is presented in Fig. 5b. Both inversion methods give similar results, except for bins 4 and 6. Figure 5c presents the variability of σ1 and σ3 within all bins for the whole process of triaxial loading obtained by BRTM; dark lines represent σ1, whereas σ3 is represented by light lines. The median dip angles of the principal stresses are equal to 72.6° and 9.7° for σ1 and σ3, respectively. Numerical Modelling To answer the questions raised in the introduction, a FLAC3D model of the WG rock sample under triaxial compressive pressure was developed to model the spatial and temporal evolution of stresses and strains within the sample. Thereafter, the model was calibrated using the data from the actual laboratory experiment. This is to justify the stress changes originating from the seismic observations gathered from the AE sensors and the seismic moment tensor calculations from the laboratory triaxial experiment.
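Comparing principal stress orientations from two inversion codes ultimately reduces to comparing axis directions given as trend/plunge pairs. The following Python sketch (generic, not part of MSATSI or BRTM) converts trend/plunge to a unit vector and reports the angular difference between two σ1 estimates, treating the axes as unsigned lines; the example orientations are hypothetical.

```python
import numpy as np

def trend_plunge_to_vector(trend_deg: float, plunge_deg: float) -> np.ndarray:
    """Unit vector in a north-east-down frame for a given trend/plunge (degrees)."""
    t, p = np.radians(trend_deg), np.radians(plunge_deg)
    return np.array([np.cos(p) * np.cos(t),   # north component
                     np.cos(p) * np.sin(t),   # east component
                     np.sin(p)])              # down component

def axis_angle_deg(v1: np.ndarray, v2: np.ndarray) -> float:
    """Angle between two axes; the sign of each vector is irrelevant."""
    c = abs(np.clip(np.dot(v1, v2), -1.0, 1.0))
    return float(np.degrees(np.arccos(c)))

# Hypothetical sigma1 orientations for the same bin from two inversions.
s1_brtm = trend_plunge_to_vector(120.0, 72.6)
s1_msatsi = trend_plunge_to_vector(135.0, 70.0)
print(f"sigma1 discrepancy: {axis_angle_deg(s1_brtm, s1_msatsi):.1f} deg")
```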
The calibration process in numerical modelling involves explicit fine-tuning of the geomechanical parameters in the numerical model to achieve a significant level of agreement with the experimental data (Zeh-Zon Lee 2014). Geometry and Boundary Conditions A cylindrical specimen with a circular base with a radius of 20 mm and a height of 107 mm was modelled using hexahedral-shaped zones. The rock specimen model consisted of 42,400 elements and 43,254 grid points. The boundary conditions applied to the model consist of fixed vertical displacements at the top of the sample and at the surface contacting the loading plate. Displacements in the horizontal directions are allowed, thereby implementing zero friction. In practice, special low-friction materials are used to reduce friction at the base and loading plate, which justifies the application of the previously mentioned boundary conditions. Additionally, friction added at the sample top could result in an increased strength of the sample, whose origin would later be difficult to distinguish: it could be influenced by the applied friction or by the influence of the intermediate principal stress (Senent et al. 2013). Constitutive Model Rock responds to compressive loading with different phenomena: elastic deformation, microfracture initiation, plastic deformation and finally failure. The stages before failure are characterized by an elastic response, while the post-failure behaviour of brittle rocks, such as granite, is characterized by massive crack propagation leading to shear bands. These behaviours are strongly influenced by the amount of confining pressure (Tan et al. 2015). Numerous constitutive models have been developed since the late 1950s, when researchers started to use plasticity theory in constitutive models for rock specimens. The most commonly used Mohr-Coulomb failure criterion is still evolving, combining ever newer behaviours such as cohesion softening-friction hardening and adding different parameters to control the strength degradation behaviour of the rock (Tan et al. 2015). Typical axial stress-strain curves, depending on the degradation behaviour, are presented in Fig. 6. The intact strength of the rock and the presence of joints strongly govern the strength and deformation of a jointed specimen. The constitutive model used in this study is the ubiquitous joint model with strain softening (SUBI). Strain softening was achieved using cohesion softening-friction hardening behaviour. The model accounts for the presence of an orientation of weakness (weak plane) in a Mohr-Coulomb model. In the SUBI model, yield can occur in the solid, along the weak plane, or in both (FLAC3D, Version 5.0, User's Guide). The presence of the joint is accounted for in the plastic corrections but has no effect on the elastic behaviour, and the model is restricted to one set of joints (Dehkordi 2008; Ismael and Konietzky 2017). When general failure is detected within Itasca's FLAC software and plastic corrections are applied, the new stresses are then analysed for failure on the weak plane and updated accordingly (FLAC3D, Version 5.0, User's Guide). The criterion for failure on the plane, whose orientation is given, consists of a composite Mohr-Coulomb envelope with tension cut-off. The position of a stress point on the latter envelope is controlled by a nonassociated flow rule for shear failure and an associated rule for tension failure.
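As an illustration of the composite Mohr-Coulomb criterion with tension cut-off mentioned above, the sketch below checks whether a given normal/shear stress pair on a weak plane violates the envelope; the parameter values are placeholders, not the calibrated joint properties of the model.

```python
import math

def weak_plane_fails(sigma_n: float, tau: float,
                     cohesion: float, friction_deg: float, tensile_strength: float) -> bool:
    """Composite Mohr-Coulomb check on a weak plane (compression positive, MPa).

    Failure occurs either in tension (sigma_n below -tensile_strength) or in shear
    when |tau| exceeds c + sigma_n * tan(phi).
    """
    if sigma_n < -tensile_strength:          # tension cut-off
        return True
    shear_strength = cohesion + sigma_n * math.tan(math.radians(friction_deg))
    return abs(tau) > shear_strength


# Placeholder joint properties: cohesion 30 MPa, friction 30 deg, tensile strength 5 MPa.
print(weak_plane_fails(sigma_n=75.0, tau=90.0,
                       cohesion=30.0, friction_deg=30.0, tensile_strength=5.0))  # True
```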
The SUBI constitutive model was chosen due to the presence of the two notches created before the loading was applied to the rock sample, which were not modelled explicitly in the sample. The SUBI model allows both matrix and joint properties to be specified. Geomechanical Parameters Although the strain-softening constitutive model allows parameter values to change with plastic strain, the stiffness moduli are independent of plastic strain. The majority of the published values, such as the elastic, shear and bulk moduli, density and Poisson's ratio, can be used as-is in the constitutive model, but the cohesion, friction and tensile strengths are more difficult to relate directly between laboratory test values and modelling values. The hardening and softening parameters for this case were back-calculated from the results of the laboratory triaxial test (Tables 2, 3). Simulation of Triaxial Loading To model the triaxial loading applied to the sample, the following modelling steps were applied: Step 1 Boundary conditions were applied to the model geometry. Then, the hydrostatic stress state was initialized at 75 MPa. Step 2 Confining pressure was applied to the outer faces surrounding the cylinder. Step 3 A constant strain rate was applied in the vertical direction at the bottom end of the cylinder, leading to an increase in axial stress until failure occurred. The model was loaded by applying a constant grid velocity of 3.21 × 10⁻⁸ m/step (determined from the constant strain rate of 3 × 10⁻⁶ s⁻¹ obtained during the laboratory experiment and the sample height). Additionally, servo-controlled loading was used in the numerical calculations. When the failure mechanism is initiated, the stress state in the sample becomes nonuniform; to better control the deformation in the system and to reduce numerical errors in the modelling, the magnitude of the strain rate can be monitored and subsequently adapted. The FLAC3D software allows the stress distribution within the model to be calculated with an accuracy specified by the user; accuracy is provided to the model by means of densification of the mesh grid. Stress information is stored in each zone of the numerical model. The principal stress values were monitored at nine points within each of the nine bins of the WG sample: eight monitoring points were located in the bin corners, and one was centrally located. Global Stress Relations A low RMSE (0.036) was achieved for the fit between the stress-strain curve obtained from the triaxial compression test made in the laboratory and the curve resulting from the numerical modelling (Fig. 7). The experimentally observed and numerically derived peak strengths of the sample reach 574 MPa and 584 MPa, respectively (a difference of 1.62%). In both the experimental and numerical tests, sample failure occurred at similar axial strains, equal to 1.19 and 1.15, respectively (a difference of 3.36%). In the numerical simulation, slight strength hardening was observed after failure. Analysing the appearance time of yielded zones during numerical loading, these zones start to become visible during the strength-hardening period, so we can assume that the failure appears at slightly greater strains. The FLAC3D software also allows the development of the localization of shear bands to be tracked by plotting zones in which plastic yielding occurred. In Fig. 8, a comparison between the failure pattern created during the experiment and that from the numerical modelling is shown.
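Two of the numbers quoted above can be reproduced with simple arithmetic. The Python sketch below converts the laboratory strain rate into an axial velocity for the 107 mm sample and shows how an RMSE between a measured and a modelled stress-strain curve would be computed; the timestep-per-step assumption and the example curves are illustrative only and are not taken from the study.

```python
import numpy as np

# Axial velocity implied by the laboratory strain rate and the sample height.
strain_rate = 3.0e-6          # 1/s
height_m = 0.107              # m
velocity_m_per_s = strain_rate * height_m
print(f"axial velocity: {velocity_m_per_s:.3e} m/s")   # 3.21e-07 m/s

# The reported grid velocity of 3.21e-8 m/step would follow if one calculation
# step is assumed to represent 0.1 s of loading (an assumption, not stated in the text).
print(f"per-step velocity at 0.1 s/step: {velocity_m_per_s * 0.1:.3e} m/step")

# Root-mean-square error between two stress-strain curves sampled at the same strains.
stress_lab = np.array([100.0, 300.0, 500.0, 574.0])     # MPa, illustrative values
stress_model = np.array([102.0, 298.0, 505.0, 584.0])   # MPa, illustrative values
rmse = np.sqrt(np.mean((stress_lab - stress_model) ** 2))
print(f"RMSE = {rmse:.2f} MPa")
```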
The fracture plane generated during the laboratory experiment is visibly complex, revealing a broad damage zone composed of two subsurfaces with a rough fracture surface. The numerical modelling shows the zones yielded in shear (Fig. 8b). The created shear band is similar to the failure pattern obtained from the laboratory test. Figure 8b shows the vertical displacement modelled on the fault plane reaching a maximum value of 2.53 mm. The piston displacement observed during the experimental trial reached a value of 2.52 mm at the moment of sample failure, which shows a close convergence with the modelled value. (Fig. 8 caption: post-mortem cross-section of the sample, with the damage zone filled with epoxy; a, the broad damage zone composed of two sub-surfaces; b, the distribution of the vertical displacement along the yielded zones; c, the not-yielded zones, presenting the created fracture.) The distributions of other parameters within the WG sample, such as the maximum, intermediate and minimum principal stresses along with the maximum shear strain rate and the vertical, horizontal Z and horizontal X displacements, are presented in the auxiliary materials (Appendix 1). Results and Discussion - Local Stress Relations We compared the stress orientations derived from the AE data with those modelled with the FLAC3D software. The comparison was made based on the 3D rotation angle (Kagan 2007) between the cardinal axes of the two stress tensors. We calculated the angle by which one principal stress coordinate system (e.g., from the AE stress inversion) has to be rotated to obtain the other one (numerically modelled), to find where the two stress fields deviate from each other the most. To infer the differences in the rotation of the stress field derived from the stress inversion based on focal mechanisms of AE data and the stress field calculated numerically, we apply analysis of variance (ANOVA). ANOVA compares mean scores with each other to detect any overall differences between the results. First, we identified the two phases, the plastic deformation phase and the after-failure phase, to check whether the rotations were higher before or after failure. Additionally, we introduced the already mentioned bin factor, which describes the location of the rotation scores in the rock sample. We tested the null hypothesis that there are no differences in the mean rotation values with respect to time phases or spatial bins. The alternative hypothesis states that the related mean values of rotation are not equal; at least one mean is different from another mean. We calculated the F statistic, which is the ratio of two variances, between-groups and within-groups, that are expected to be approximately equal when the null hypothesis is true, which yields F statistics near 1. The F statistic calculated from our sample is compared to the critical F value of the F distribution for a population where the null hypothesis is true at a certain significance level (usually the 0.05 level). We evaluated whether our sample F value was so rare that it justified rejecting the null hypothesis for the entire population. The probability of obtaining an F value that is at least as high as our study's value is the p value, which indicates the level of statistical significance. A low probability, i.e., a low p value, indicates that our sample data are unlikely when the null hypothesis is true, and the null hypothesis is rejected. Table 4 shows the results of the ANOVA for the within-subjects effects.
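Before turning to the ANOVA results, the 3D rotation angle between two principal-axis systems described above can be computed as sketched below. This generic Python helper treats the principal axes as an unsigned orthonormal triad and minimizes over the sign ambiguities, which captures the spirit of the Kagan (2007) angle without reproducing that paper's exact algorithm.

```python
import numpy as np
from itertools import product

def principal_axis_rotation_deg(A: np.ndarray, B: np.ndarray) -> float:
    """Minimal rotation angle (deg) between two stress principal-axis frames.

    A and B are 3x3 matrices whose columns are the sigma1, sigma2, sigma3 unit
    vectors (assumed right-handed for A). Because the sign of each axis is arbitrary,
    the angle is minimized over all column sign flips of B that keep a proper frame.
    """
    best = 180.0
    for signs in product((1.0, -1.0), repeat=3):
        Bs = B * np.array(signs)              # flip selected columns
        if np.linalg.det(Bs) < 0:             # keep a right-handed frame
            continue
        R = Bs @ A.T                          # rotation taking frame A to frame Bs
        cos_angle = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
        best = min(best, float(np.degrees(np.arccos(cos_angle))))
    return best

# Example: frame B is frame A rotated by 20 degrees about the vertical axis.
A = np.eye(3)
theta = np.radians(20.0)
B = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
print(f"rotation = {principal_axis_rotation_deg(A, B):.1f} deg")  # ~20.0
```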
The p values of the test statistic F for the main effects (bin and phase) are 0.0283 and 0.0008, respectively, and both effects are statistically significant. However, the interaction between them is insignificant, since F is 1.11 and p is 0.3519. We can, therefore, conclude that there was a significant difference over time between the mean values of rotation in the particular bins, and we ran post hoc tests to obtain more information about where the differences between the groups lie. The Tukey honest significant difference (HSD) test was used to determine whether the difference between two sets of data is statistically significant. The Tukey test statistic is obtained by dividing the absolute value of the difference between a pair of means by the standard error of the mean (SE). The post hoc Tukey test confirmed that the rotations of the 2nd bin are significantly lower than the rotations observed in the other bins (except the 1st and 5th-7th bins). When we look at the phase of deformation, we see that the rotations during the failure phase become significantly higher in comparison to the plastic phase. The analysis of the interactions between the main effects provides insights into the behaviour of the stress field in the particular bins during the experiment. It is visible that the rotation of the stress field is affected by the phase of deformation for most of the bins. However, for the failure phase of the 5th bin, the stress field is significantly less rotated with respect to the modelled one: for this last phase of deformation, when the damage of the sample appears, the rotation does not increase, contrary to the rest of the bins. From Fig. 9, it can be concluded that the largest rotation of the stress field provided by the focal mechanisms of AE events with respect to the numerically calculated stress field is observed at the edge bins 1, 3 and 7, which do not frame the damage zone of the sample. The relatively large distances to the failure plane and, additionally, for bins 1 and 7, the lowest numbers of inversions performed during both the plastic and failure phases could result in such rotations. The lowest rotations are observed in the central bins of the sample, numbered 2 and 5, and in the upper right corner, namely, bin 9. Such low rotations for bins 5 and 9 can possibly be explained by the largest number of inversions performed within these bins (more reliable results) as well as by their location with respect to the newly created fractures. However, the rotations can be lower not only because of the location relative to the fracture plane but also due to a more stable stress field, which is related to the offset from the model boundaries and hence the boundary conditions. The variation in the trends and plunges of σ1 in all the sample bins from the AE data in comparison to the numerically modelled values is presented in Fig. 10. Coloured circles present the trends and plunges for the particular bins (the bin number is shown inside the circles) retrieved from the AE data, whereas the numerically modelled data are presented with black crosses. The latter can be presented with just one point because the plunge of σ1 is the same for all bins and is additionally equal to 90°; in this case, different values of the σ1 trend do not influence the cross position. Vivid colours correspond to the plastic phase of deformation, while faded colours present the state of failure.
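A two-factor ANOVA of this kind, followed by Tukey's HSD, can be reproduced with standard statistical tooling. The Python sketch below uses statsmodels on synthetic rotation data with bin and phase factors, purely to illustrate the workflow rather than the study's actual data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Synthetic rotation angles (deg) for 9 bins and 2 phases, several windows each.
rows = []
for b in range(1, 10):
    for phase in ("plastic", "failure"):
        base = 15.0 + 3.0 * (b in (1, 3, 7)) + 5.0 * (phase == "failure")
        for _ in range(10):
            rows.append({"bin": f"bin{b}", "phase": phase,
                         "rotation": base + rng.normal(scale=4.0)})
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction: rotation ~ bin + phase + bin:phase.
model = ols("rotation ~ C(bin) * C(phase)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Post hoc Tukey HSD on the bin factor.
print(pairwise_tukeyhsd(df["rotation"], df["bin"]))
```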
During the plastic deformation, for two bins, namely, the 4th and 7th bins, the trends of σ1 are in the range of the 2nd (3 bins) and 3rd (1 bin) quarters of the spheroid, whereas the rest of the trends (5 bins) lie in the 1st and 2nd quarters. In the failure state, the σ1 trend for two bins lies in the 1st quarter, with four bins in the 2nd quarter, two in the 3rd and one in the 4th. Conclusions This paper presents a detailed approach for calibrating a strain-hardening/softening ubiquitous joint model (based on the Mohr-Coulomb model) for simulating the stress-strain behaviour of WG samples. The procedure was presented by calibrating the model against triaxial testing data. Successful calibration of the global stress-strain behaviour of the numerically modelled sample against the experimental data allowed us to compare local stress tensors from two sources: numerical modelling and stress tensor inversion of AE events. The calibration of the specimen shows the following results: • The bilinear ubiquitous joint model with friction hardening/cohesion softening is capable of reproducing Westerly granite behaviour during the triaxial fracture experiment. Additionally, the distribution of the vertical displacement along the yielded zones creating the failure plane within the Westerly granite sample is presented. (Fig. 11 caption: distributions of the a vertical, b horizontal Z and c horizontal X displacements within the sample.) • We identified the two time phases as the plastic deformation phase and the after-failure phase. The p values of the test statistic F for the main effects (bin and phase) are statistically significant. However, the interaction between them is insignificant. We can, therefore, conclude that there was a significant difference over time between the rotation means in the particular bins, and we ran post hoc tests to obtain more information about where the differences between the groups lie. • The largest rotation of the stress field provided by the focal mechanisms of AE events with respect to the numerically calculated stress field is observed in the edge bins, which do not frame the damage zone of the sample. The lowest rotations are observed in the central bins of the sample. • The rotation of the stress field is affected by the phase of deformation for most bins. Rotations during the failure phase become significantly higher in comparison to the plastic phase. The post hoc test confirmed the abovementioned observations. Funding This work is partially funded by Science4CleanEnergy (S4CE), a European research consortium funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 764810. Availability of Data and Material The datasets generated numerically during the current study are available from the corresponding author on reasonable request. The authors would like to thank Grzegorz Kwiatek from the Deutsches GeoForschungsZentrum (GFZ) for providing the triaxial experimental data as well as for his invaluable comments and advice on this paper. Declarations Conflict of interest All authors have participated in (a) conception and design, or analysis and interpretation of the data; (b) drafting the article or revising it critically for important intellectual content; and (c) approval of the final version. This manuscript has not been submitted to, nor is under review at, another journal or other publishing venue. The authors have no affiliation with any organization with a direct or indirect financial interest in the subject matter discussed in the manuscript.
Informed Consent Informed consent was obtained from all individual participants included in the study. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. References Angelier J (2002) Inversion of earthquake focal mechanisms to obtain the seismotectonic stress IV - a new method free of choice among nodal planes. Geophys J Int 150:588-609 (Fig. 15 caption: differences in the principal stress values monitored at 9 points within each of the 9 bins of the WG sample; the colours of the curves correspond to the bin being compared to the 5th bin, and each curve combines recordings from all 9 points within a particular bin.)
7,940.8
2021-12-29T00:00:00.000
[ "Geology" ]
Longitudinal Study of the Effects of Flammulina velutipes Stipe Wastes on the Cecal Microbiota of Laying Hens ABSTRACT Because antibiotics have been phased out of use in poultry feed, measures to improve intestinal health have been sought. Dietary fiber may be beneficial to intestinal health by modulating gut microbial composition, but the exact changes it induces remain unclear. In this study, we evaluated the effect of Flammulina velutipes stipe wastes (FVW) on the cecal microbiotas of laying chickens at ages spanning birth to 490 days. Using clonal sequencing and 16S rRNA high-throughput sequencing, we showed that FVW improved microbial diversity when the microbiota underwent fluctuations. The development of the microbiota supported the physiological development of the laying hens. Supplementation with FVW enriched the relative abundance of Sutterella, Ruminiclostridium, Synergistes, Anaerostipes, and Rikenellaceae, strengthened the positive connection between Firmicutes and Bacteroidetes, and increased the concentration of short-chain fatty acids (SCFAs) in early life. FVW maintains gut microbiota homeostasis by regulating the Th1, Th2, and Th17 balance and the secretory IgA (S-IgA) level. In conclusion, we showed that FVW induces microbial changes that are potentially beneficial for intestinal immunity. IMPORTANCE Dietary fiber is popularly used in poultry farming to improve host health and metabolism. Microbial composition is known to be influenced by dietary fiber use, although the exact FVW-induced changes remain unclear. This study provided a first comparison of the effects of FVW and the most commonly used antibiotic growth promoter (flavomycin) on the cecal microbiotas of laying hens from birth to 490 days of age. We found that supplementation with FVW altered cecal microbial composition, thereby affecting the correlation network between members of the microbiota, and subsequently affecting intestinal immune homeostasis. KEYWORDS dietary fiber, Flammulina velutipes, flavomycin, microbiota, laying hens The microbial communities inhabiting the gastrointestinal tract play an important role in nutrient digestion, energy utilization, and immune system regulation (1). These regulatory effects are mediated by the complex microbial interactions and the metabolites generated by the members of the microbial community (2, 3). The intestinal microbiota possesses genes encoding carbohydrate-active enzymes, which can decompose dietary fiber that is not digested by the host and produce short-chain fatty acids (SCFAs), mainly acetate, propionate, and butyrate (4). SCFAs affect host energy utilization.
First, SCFAs, especially butyrate, are the energy substrates for colonic cells (5), and second, propionate is a substrate for gluconeogenesis that can induce intestinal gluconeogenesis, signaling through the central nervous system to protect the host from diet-induced obesity and associated glucose intolerance (6). Third, acetate may help to improve metabolic health by increasing energy expenditure through whole-body fat browning (7). SCFAs can also act as signaling molecules to regulate host immunity. The combination of propionate and butyrate effectively inhibits the lipopolysaccharide (LPS)-induced inflammatory response of regulatory T cells (Treg cells) and reduces the production of inflammatory cytokines such as interleukin 6 (IL-6) and IL-12 (8). A healthy gut microbial state, characterized by a high diversity of microorganisms, improves the functional diversity as well as microbe-microbe and host-microbe interactions. It is also called an equilibrium state or steady state (9, 10). In contrast, microbial imbalance can induce inflammatory responses mediated by Th1, Th2, and Th17 cells (11, 12). The Th cells activated by intestinal epithelial cells induce B cells to produce and secrete antibodies on the surface of the intestinal mucosa, mainly secretory immunoglobulin A (IgA). Secretory immunoglobulin A (S-IgA) may promote the retention of beneficial members of the intestinal flora and the removal of opportunistic pathogens through different binding mechanisms (13). Antibiotic growth promoters (AGPs) have played a decisive role in animal husbandry for more than half a century (14). Among them, flavomycin (synonyms: bambermycin, moenomycin, flavophospholipol) is a typical glycolipid phosphate antibiotic that acts on Gram-positive bacteria and mainly plays a role in promoting the growth performance of chickens (15, 16). However, the overuse of antibiotics has led to a rise in antimicrobial resistance. In response to this threat to public health, the European Union introduced a ban on the use of antibiotics as growth promoters in 2006 (17). Therefore, this crutch of the poultry industry must be replaced. In recent years, a growing number of scientific studies have shown positive effects of dietary fiber on chicken health and productivity (18-20). Feeding experiments have mainly been carried out with insoluble fiber sources that arise as by-products during industrial production, such as oat hulls, sunflower hulls, soybean hulls, wheat bran, and wood shavings (21). Flammulina velutipes is one of the most popular edible fungi and is rich in biological nutrients (carbohydrate, dietary fiber, glycoproteins, polyphenols, etc.) (22, 23). The annual output of F. velutipes in China exceeded 2.5 million tons from 2013 to 2019 (24); meanwhile, large amounts of the by-product F. velutipes stipe wastes (FVW) have also been produced. Previous studies have found that dietary FVW supplementation has no negative effect on the growth of laying hens but can increase antibody titers, enhance immune responses, promote calcium deposition in eggshells, and improve antioxidant capacity in serum and egg yolk (25-28). The phylogenetic composition of the microbiota commonly found in different gut segments of broilers has been well characterized (29), whereas the literature describing this characteristic of laying hens is very limited.
The long-term diet is strongly associated with the composition, activity, and dynamics of the gut microbiome, while short-term dietary changes are often not sufficient to elicit major changes in the ecosystem (30, 31). Laying hens live up to approximately 70 weeks before their laying rate decreases to about 65%. Previous studies reported the influence of microbial changes on outcomes at different ages, preventing interstudy comparisons (32-34). Therefore, this study was designed to estimate the effects of FVW or flavomycin (FLA) on the cecal microbiota at different feeding stages by clonal sequencing and 16S rRNA gene analysis. The final purpose was to elucidate the regulatory effects of FVW on the microbiotas of laying hens. This study will provide a theoretical basis for using FVW as a prebiotic to maintain the intestinal health of laying hens. RESULTS The diversity of the cecal microbiota in laying hens evolves with age and diet. The impact of FVW on the cecal microbiota was assessed over a long study period, 490 days, with 450 laying hens divided into five diet supplementation groups (Fig. 1). Ninety chickens were randomly divided into 3 replicates in each group and received 5 ppm FLA, low (2%) FVW (LFVW), medium (4%) FVW (MFVW), high (6%) FVW (HFVW), or the basic (unsupplemented) diet (BD). Nine chickens in each group were sacrificed and cecal contents were collected at day 7 (prestarter), day 28 (starter), day 70 (grower), day 112 (developer), and day 490 (finisher) to assess longitudinal microbial development. To investigate the evolution of the cecal microbiota over time, alpha diversities were compared at the end of the five phases (prestarter, starter, grower, developer, and finisher). In general, the richness, diversity, and evenness gradually increased over time, while the richness and diversity were lower in the starter phase (P < 0.01), which might be due to cascading perturbations from external factors (Fig. 2A to C). In the starter phase, richness and diversity were higher in the FVW and FLA groups than in the BD group (P < 0.01) (Fig. 2D and E; also, see Fig. S1A to C in the supplemental material). In the grower phase, the richness was higher in the LFVW and MFVW groups than in the BD group (P < 0.05) (Fig. 2D). In the developer phase, the evenness was lower in the HFVW group than in the FLA group (P < 0.05) (Fig. 2F). This might indicate that FVW could improve the diversity of the gut microbiota when there are fluctuations in gut microbiota development. The discriminant distribution of samples from different feeding phases was described by principal-component analysis (PCA). Analysis of similarities (ANOSIM) showed that flock development exerted a substantial influence on overall community variations (P = 0.0001) (Fig. 3A). In the starter, grower, and developer phases, samples from the MFVW and HFVW groups clustered separately from those of the BD and FLA groups (Fig. 3C to E), while in the finisher phase, the cecal microbiotas of laying hens tended to be homogenous (P = 0.1567) (Fig. 3F; Fig. S1D). Chickens receiving the BD and the FLA-supplemented diet harbored similar microbial profiles, as they clustered together in each of the five phases (Fig. 3B to F). Thus, the increase in community diversity was accompanied by decreased heterogeneity of the cecal microbiota. Succession of dominant gut microbiotas in the cecum of laying hens.
The succession of dominant gut microbiotas in the cecum of laying hens was traced at the taxonomic levels of phylum, class, order, family, and genus (Fig. 4; Fig. S2). Firmicutes was the dominant phylum in the prestarter and starter phases, while Firmicutes, Bacteroidetes, and Proteobacteria were dominant in the last three phases (Fig. 4A). Clostridiales was the dominant order throughout the feeding phases of laying hens, with a relative abundance greater than 80% in the first two phases, while in the last three phases, Bacteroidales and Clostridiales were both dominant (Fig. 4B). At the family level, Ruminococcaceae were predominant in the prestarter phase. During the starter phase, Lachnospiraceae transitioned to the dominant family. During the grower phase, Bacteroidaceae replaced Lachnospiraceae as the dominant family, while the predominance of Bacteroidaceae was diluted by Eubacteriaceae and Pseudomonadaceae in the developer phase. In the finisher phase, Prevotellaceae, Eubacteriaceae, Rikenellaceae, and Bacteroidaceae became the dominant families (Fig. 4C). At the genus level, Ruminococcus, Bariatricus, Intestinimonas, Pseudoflavonifractor, and Lachnoclostridium were predominant in the prestarter phase. Lachnoclostridium, Blautia, and Roseburia were dominant in the starter phase. Bacteroides, Ruminiclostridium, and Barnesiella were dominant in the grower phase, while Pseudoflavonifractor, Bacteroides, Eubacterium, and Acinetobacter were the most important in the developer phase. Finally, Alistipes, Eubacterium, and Prevotella were the dominant genera in the finisher phase (Fig. 4D). At the end of the feeding period (in the finisher phase), the linear discriminant analysis effect size (LEfSe) multilevel discriminant analysis of species differences indicated that, compared to the BD group, the FLA group was differentially enriched in Ruminococcaceae, Desulfovibrio, Peptococcus, Enterorhabdus, and Faecalicoccus (Fig. 5A). In the LFVW group, Sutterella and Ruminiclostridium were differentially enriched (Fig. 5B), while Synergistes and Anaerostipes were enriched in the MFVW group (Fig. 5C) and Rikenellaceae and Synergistes in the HFVW group (Fig. 5D). FVW supplementation altered the co-occurrence network of the cecal microbiota. The pattern of interbacterial interactions among the cecal microbial communities was analyzed by constructing a co-occurrence network for each group (based on Spearman correlation). The metacommunity co-occurrence networks of the BD, FLA, LFVW, MFVW, and HFVW groups comprised, respectively, 159, 154, 143, 159, and 152 edges, representing 30 interactive genera (Fig. 6). In the FLA group, the largest numbers of positive relationships between Firmicutes and Firmicutes and between Bacteroidetes and Bacteroidetes, and of negative relationships between Firmicutes and Bacteroidetes, were found. Meanwhile, the fewest negative correlations between Firmicutes and Firmicutes and between Bacteroidetes and Bacteroidetes, and the fewest positive correlations between Firmicutes and Bacteroidetes, were also found (Fig. 6B).
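A co-occurrence network of the kind described above is typically built by computing pairwise Spearman correlations between genus-level abundance profiles and keeping strong, significant edges. The Python sketch below (using SciPy and NetworkX, with made-up abundance data and arbitrary thresholds) illustrates the general approach rather than the exact pipeline used in the study.

```python
import numpy as np
import networkx as nx
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Toy abundance table: rows = samples, columns = genera.
genera = [f"genus_{i}" for i in range(10)]
abundance = rng.poisson(lam=20, size=(30, len(genera))).astype(float)

# Pairwise Spearman correlations; keep |rho| >= 0.6 and P < 0.05 as edges.
G = nx.Graph()
G.add_nodes_from(genera)
for i in range(len(genera)):
    for j in range(i + 1, len(genera)):
        rho, p = spearmanr(abundance[:, i], abundance[:, j])
        if abs(rho) >= 0.6 and p < 0.05:
            G.add_edge(genera[i], genera[j], weight=rho,
                       sign="positive" if rho > 0 else "negative")

print(f"{G.number_of_nodes()} nodes, {G.number_of_edges()} edges")
for u, v, d in G.edges(data=True):
    print(u, v, f"rho={d['weight']:.2f}", d["sign"])
```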
The long-term supplementation of FVW changed the interactions between members of the gut microbiota, as shown by the strengthening positive correlation between Firmicutes and Bacteroidetes and the weakening positive correlation between Firmicutes and Firmicutes. Among the treatments, MFVW supplementation caused the most significant changes in microbial interactions (Fig. 6A to E). These results might indicate that the supplementation of fiber-rich FVW affects the composition of carbon sources available to microorganisms and thereby changes energy utilization among the intestinal microorganisms. FVW supplementation increased SCFA concentrations in the early life of laying hens. The microbiota in the cecum ferments undigested carbohydrates by using its own glycohydrolytic activity to produce SCFAs (35). SCFAs play a role in regulating immunity by acting on epithelial and immune cells (36). Acetate, propionate, and butyrate were maintained at stable levels throughout the feeding phases of laying hens (Table 1). FVW supplementation accelerated SCFA production in the early life (prestarter and starter phases) of laying hens, while FLA mainly promoted the production of SCFAs during the grower and developer phases (P < 0.05) (Table 1). In the prestarter phase, acetate levels were higher in the MFVW (9.05 ± 0.44 mmol/L) and HFVW (9.39 ± 0.33 mmol/L) groups than in the BD (7.26 ± 0.10 mmol/L) and FLA (6.62 ± 0.04 mmol/L) groups (P < 0.05). Broadly, the highest level of propionate was found in the LFVW group. In the starter phase, acetate, propionate, and butyrate were significantly increased in the FLA and FVW supplementation groups compared to the BD group (P < 0.05). In the grower and developer phases, the FLA group showed the highest concentrations of acetate and butyrate, while the HFVW group showed the lowest concentrations of SCFAs in general (P < 0.05). In the finisher phase, SCFAs were significantly reduced in the FLA and FVW groups (P < 0.05) (Table 1). FVW regulated the homeostasis of intestinal mucosal immunity in laying hens. The role of the microbiome is especially crucial in early life for the development of the immune system. A microbial imbalance could induce inflammatory responses mediated by Th1, Th2, and Th17 cells (12). Th1 and Th17 cells secrete proinflammatory cytokines, while Th2 cells secrete anti-inflammatory cytokines. The levels of the proinflammatory cytokines tumor necrosis factor alpha (TNF-α) and IL-6 were significantly reduced in the FVW groups compared with those in the BD and FLA groups during the prestarter and starter phases (P < 0.05) (Fig. S3A to D). IL-2 is known to promote the differentiation of the anti-inflammatory cytokine IL-10 (37). The level of IL-2 significantly increased in the FVW groups compared to the BD and FLA groups (P < 0.05) (Fig. S3G and H). The levels of the anti-inflammatory cytokine IL-4 significantly increased in the MFVW and HFVW groups (Fig. S3I). FVW supplementation significantly increased the levels of S-IgA in the small intestinal mucosa compared to the BD and FLA groups (P < 0.05) (Fig. S3M and N).
The levels of IL-6 and IL-2 were significantly increased in the FLA group compared to the BD group (Fig. S3C and H). The other cytokines did not change significantly between groups (Fig. S3). Therefore, FVW could regulate the dynamic balance of intestinal mucosal immunity in laying hens, which might be beneficial for the homeostasis of the commensal microbiota. DISCUSSION The gut microbiota of laying hens plays a crucial role in host health and development. Dietary fiber could act as a prebiotic to regulate the intestinal health of chickens (38), but the effects of long-term dietary interventions on the gut microbiotas of laying hens were largely unknown. Therefore, our study aimed to evaluate the effects of fiber-rich Flammulina velutipes stipe waste (FVW) on the microbiota of laying hens over 70 weeks, using FLA (flavomycin), an antibiotic growth promoter commonly used in laying hens, as a reference. FVW induced beneficial changes in the cecal microbiota of laying hens. It changed the interaction network between bacteria and regulated the homeostasis of intestinal mucosal immunity. These results demonstrate that the long-term feeding of fiber-rich FVW may serve as a potential prebiotic alternative to the use of antibiotic growth promoters. FVW increased the diversity of the gut microbiota in laying hens during the starter phase. A higher diversity of the gut microbiota is associated with a healthier physiological state (39). The factors influencing gut microbiota diversity include host factors and environmental factors, such as age and diet (40, 41). Previous studies reported that dietary fiber supplementation has no significant effect on the gut microbiota diversity of laying hens during the grower and developer phases (9 to 20 weeks of age) and the finisher phase (89 weeks of age) (42, 43). Furthermore, studies on the gut microbiota in the early life of laying hens are very limited. These results confirmed the conjecture of this study that the starter period is the key period for the development of the gut microbiota in laying hens and that a diet with FVW as a source of fiber could increase the diversity of the gut microbiota during this period. Supplementation of almond hulls (rich in insoluble dietary fiber) had no effect on broiler cecal microbiota diversity (44), while alfalfa (rich in fiber) increased intestinal microbiota diversity (45). The different results may be due to the fact that the fiber sources in FVW are mainly hemicellulose and cellulose. Previous studies have shown that antibiotic growth promoters (flavomycin and virginiamycin) could increase the richness of the gut microbiota in broiler chickens but have no significant effect on the diversity (46), which is consistent with the results of this study showing that the subtherapeutic dose of flavomycin did not disturb the diversity of the gut microbiota. In this study, the structure of the gut microbiota showed convergence and stability with increasing age, which is in agreement with previous results obtained with laying hens (47). However, the heterogeneity of the gut microbiota structure in broiler chickens increased over time (48). This contrasting result suggests that the farming duration and chicken type lead to differences in the structural development of the gut microbiota.
Our results emphasize that the starter phase (0 to 28 days) may be the optimal time for FVW to intervene and influence the microbiome. Firmicutes was the absolutely dominant phylum during the first 4 weeks of the laying hens' life, in response to the high-protein feed composition during the brooding stage of bone and muscle development. Bacteroidetes began to become the dominant phylum after the grower phase of laying hens in response to the increase in dietary fiber. The long-term dietary intervention led to a change in the gut microbiota enterotype (31). An increase in the abundance of Sutterella, Synergistes, Anaerostipes, Ruminiclostridium, and Rikenellaceae in the gut caused by long-term addition of FVW was also observed in this study. Sutterella is an obligately anaerobic, Gram-negative bacterium (49), and there is a negative correlation between the presence or abundance of Sutterella and the host inflammatory cytokine response (50, 51). Synergistes is related to reduced gastrointestinal inflammation and enhanced immune function (52). Anaerostipes is a butyrate producer, which can convert lactate, acetic acid, and sugars to butyrate (53-56). Ruminiclostridium is an anaerobic Gram-positive cellulolytic bacterium that produces a variety of carbohydrate-active enzymes (CAZymes) and catabolizes xyloglucan into glucose, xylose, galactose, and cellobiose (57). It produces extracellular multienzyme complexes, known as cellulosomes, with different specificities that enhance the degradation of cellulosic biomass (58). FLA caused enrichment of Ruminococcaceae, Desulfovibrio, Peptococcus, Enterorhabdus, and Faecalicoccus. Ruminococcaceae are recognized SCFA-producing bacteria (59). Desulfovibrio is a Gram-negative anaerobe belonging to the sulfate-reducing group (60). Sulfate-reducing bacteria can use organic compounds (lactate, propionate, and butyrate) as sources of energy and carbon (61, 62). The expansion of Desulfovibrio has been reported to be associated with inflammatory bowel disease, including ulcerative colitis (63-65). Peptococcus was found to be strongly positively correlated with body weight (BW) and average daily gain (ADG) in pig culture experiments (66). In an immune experiment with mice, Peptococcus was found to be positively correlated with LPS, D-lactic acid, and TNF-α (67). Enterorhabdus is an obesity-promoting bacterium associated with diabetes and other metabolic diseases, while the overgrowth of Enterorhabdus is also a sign of ecological imbalance after antibiotic use (68-70). The enrichment of Faecalicoccus is usually associated with intestinal inflammation, including ulcerative colitis and Crohn's disease (71). These results are consistent with previous findings that dietary-fiber-rich by-products could modulate the composition of the cecal microbiota of chickens (72-74). In addition, FVW increased the positive-relationship cluster between Firmicutes and Bacteroidetes. Previous studies have shown a strong negative relationship between Bacteroidales and Clostridiales in the cecum of chickens fed a basic diet (75). Firmicutes and Bacteroidetes represent most of the anaerobic fermentative bacteria (76), which may compete nutritionally for fermentation substrates in the cecum (77). The gut microbiome typically relies on carbohydrates as its energy source, and gut microbes that use the same energy source occupy the same niche and form a competitive symbiotic relationship (78, 79).
FVW might provide abundant dietary fiber for the gut microbiota, thereby changing the interactions between members of the microbiota (from competitive symbiosis to mutualistic symbiosis). In contrast with previous studies, this study supports the intervention effect of dietary fiber supplementation on microbe-microbe interactions (19, 80, 81). Subsequently, FVW increased acetate and propionate, but not butyrate. Intestinal bacteria ferment dietary fiber to produce short-chain fatty acids, which play an important role in immune regulation (82). This result is consistent with a previous report showing that insoluble dietary fiber can increase the content of acetate and propionate in the cecum (83). Bacteroidetes can ferment dietary fiber to produce acetate, isovalerate, and succinate, of which succinate is the raw material required for propionate production (84). Acetate strengthens the barrier function by mediating the signaling pathways that enable B cells and goblet cells to secrete mucins and IgA (85). Propionate can promote the development of Treg cells and reduce the expansion of inflammatory Th17 cells (86). Long-term propionate delivery to the colon improved glucose homeostasis, along with the suppression of systemic inflammation (87). These results suggest that FVW may regulate intestinal immunity by increasing acetate and propionate. The intestinal immune system must maintain a delicate balance between tolerance of the commensal microbiota and immunity to pathogens, maintaining low responsiveness to the commensal microbiota at steady state (88). Previous studies have shown that supplementation with dietary fiber could reduce levels of the proinflammatory factors TNF-α, IL-1β, and IL-6 (89). In this study, FVW decreased the levels of the proinflammatory cytokines TNF-α and IL-6 and increased the level of the anti-inflammatory cytokine IL-4. Additionally, a biomarker of the intestinal mucosal immune response is the production of secretory immunoglobulin A (S-IgA). It is the most prominent antibody present on mucosal surfaces and protects the intestinal mucosa against the invasion of enteric toxins and pathogenic microorganisms (13, 90). In this study, FVW increased the levels of S-IgA in the small-intestinal mucosa. Similar to our results, dietary fiber (wheat bran and sugar beet pulp) and prebiotics (xylo-oligosaccharides and mannooligosaccharides) increased the amount of S-IgA in the small intestine (91-93). The above results further prove that FVW has prebiotic potential to regulate host immunity and keep the microbiota in a steady state. In conclusion, we found that FVW, which is rich in dietary fiber, altered the interactions between members of the gut microbiota, regulated the balance between the gut microbiota and host immunity, and kept the gut microbiota in a healthy and stable state. Against the background of antibiotic-free farming, this study provides data supporting the development and application of Flammulina velutipes stipe wastes as potential prebiotics for laying hens. This study was a small-scale farming experiment under laboratory conditions, which could not fully simulate the factory feeding conditions and environmental stress of a large breeding base. Therefore, focusing on the interaction network and the competition and cooperation between members of the gut microbiota will be the next direction of our research. MATERIALS AND METHODS Experimental design and sample collection.
A 490-day study assessing the impact of Flammulina velutipes stipe wastes (FVW) on the cecal microbiota of laying chickens was performed in the Animal Feeding Room, Jilin Agricultural University, Changchun, China. FVW were provided by China Changchun Xuerong Biotechnology Co., Ltd. The collected FVW were naturally dried and then transferred to a feed factory for further use (Jilin Hanhong Animal Husbandry Co., Ltd.). A total of 450 ISA brown laying chicks, purchased from a commercial hatchery, were randomly divided into 5 groups (3 replicates/group, 30 chickens/replicate): BD (basic diet), FLA (basic diet supplemented with 5 ppm flavomycin), LFVW (basic diet supplemented with 2% FVW), MFVW (basic diet supplemented with 4% FVW), and HFVW (basic diet supplemented with 6% FVW). The different groups were fed ad libitum with starter feed from 1 to 28 days, grower feed from 29 to 70 days, developer feed from 71 to 112 days, and finisher feed from 113 to 490 days (Fig. 1). The nutritional components of Flammulina velutipes stipe wastes are shown in Table S1. All feedings were applied according to the NRC-1994 norms and the principle of equal energy and equal nitrogen (Table 2 and Tables S2 to S5). The size of the brooding cage was 60 cm by 40 cm by 50 cm (length, width, and height, respectively), and there were 14 chickens per cage from 1 to 56 days and 7 chickens per cage from 57 to 112 days. At 112 days, the laying hens were moved to a laying cage (100 cm by 60 cm by 50 cm [length, width, and height, respectively]), with 3 chickens per cage. No veterinary treatment was required for the duration of the experiment. Nine chickens per supplementation group were sacrificed at 5 defined time points: day 7 (prestarter), day 28 (starter), day 70 (grower), day 112 (developer), and day 490 (finisher). Intestinal and cecal samples were collected in the College of Life Science Building, Jilin Agricultural University. Samples were quickly flash-frozen in liquid nitrogen and stored at -80°C until further processing. PCR-DGGE and clonal sequencing. The V3 hypervariable region of the bacterial 16S rRNA gene was amplified with the primers F338-GC (5'-CGCCCGCCGCGCGCGGGGGGGCGGGGCGGGGGCAGGGGGGCCTCGGAGGCAGCAG-3') and R518 (5'-ATTACCGCGGCTGCTGG-3') with a thermocycler PCR system (MG251; Thermo Scientific, China). The PCRs were conducted using the following program: 95°C predenaturation for 5 min; 30 cycles of 95°C denaturation for 1 min, annealing at 60°C for 1 min, and extension at 72°C for 1 min; and a final extension at 72°C for 5 min. The PCRs were performed in triplicate using a 25-μL mixture containing 12.5 μL premix Taq mix, 10 μL sterilized ultrapure water, 0.5 μL forward primer, 0.5 μL reverse primer, and 1.5 μL of template DNA. A 230-bp DNA fragment was obtained and further analyzed by denaturing gradient gel electrophoresis (DGGE) with a denaturant gradient of 40% to 65% and a polyacrylamide gel concentration of 8%. The gel was cut to recover clear consensus and specific bands in the DGGE map. The reamplified DNA fragment (without the GC clamp) was purified, ligated into the pESI-T vector, and transformed into DH5α cells. The positive clones were screened and sequenced at Sangon Biotech (Shanghai, China) Co., Ltd. 16S rRNA high-throughput sequencing. The V3-V4 hypervariable regions of the bacterial 16S rRNA gene were amplified with primers F338 (5'-ACTCCTACGGGAGGCAGCAG-3') and R806 (5'-GGACTACHVGGGTWTCTAAT-3') with a thermocycler PCR system (GeneAmp 9700; ABI, USA).
The PCRs were conducted using the following program: 95°C predenaturation for 3 min, 27 cycles of 95°C denaturation for 30 s, annealing at 55°C for 30 s, and extension at 72°C for 30 s, and a final extension at 72°C for 10 min. The PCRs were performed in triplicate 20-µL mixtures containing 4 µL of 5× FastPfu buffer, 2 µL of a 2.5 mM concentration of deoxynucleoside triphosphates (dNTPs), 0.8 µL primer (5 µM), 0.4 µL FastPfu polymerase, and 10 ng of template DNA. The resulting PCR products were extracted from a 2% agarose gel, further purified by using the AxyPrep DNA gel extraction kit (Axygen Biosciences, USA), and quantified using a QuantiFluor-ST system (Promega, USA) according to the manufacturer's protocol. Purified amplicons were pooled in equimolar amounts and paired-end sequenced (2 × 300 bp) on an Illumina MiSeq platform (Illumina, San Diego, CA, USA) according to the standard protocol of Majorbio Bio-Pharm Technology Co., Ltd. (Shanghai, China). Determination of SCFA concentrations. The experimental conditions were as follows: the chromatographic column was a DB-FFAP capillary column (30 m by 250 µm by 0.25 µm), the inlet temperature was 220°C, the flame ionization detector (FID) temperature was 250°C, the column temperature program consisted of an initial temperature of 65°C followed by an increase to 190°C at a 20°C/min heating rate, the split ratio was 25:1, and gas flow rates were 25 mL/min for the carrier gas (N2), 40 mL/min for H2, and 400 mL/min for air. The levels of acetate, propionate, and butyrate were detected with an Agilent 7890A gas chromatograph (Agilent Technologies, USA). The standard solution (mixed standard) used was 60 mmol/L (3.60 g/L) of acetate, 50 mmol/L (3.72 g/L) of propionate, and 20 mmol/L (1.76 g/L) of butyrate. Determination of cytokines in intestinal mucosa. TNF-α, IL-6, IL-17, IL-2, IL-4, IL-10, and S-IgA in the intestinal mucosa were measured by using chicken-specific enzyme-linked immunosorbent assay (ELISA) quantitation kits (Lengton Bioscience Co. Ltd., Shanghai, China) according to the instructions of the manufacturer. Data analyses. Quantity One v.4.6.2 was used to analyze the PCR-DGGE fingerprints and the gray value of each band. For microbial diversity analysis, the richness index is the number of bands, the Shannon-Wiener index was calculated as H = −Σ[(ni/Σni) × ln(ni/Σni)] (where ni is the gray value of band i), and the evenness index was calculated as H/ln S (where H is the Shannon-Wiener index and S is the richness index). The differences in the microbiota community structures were evaluated by PCA based on Bray-Curtis dissimilarity values and performed with Canoco v.5. BLAST comparison was performed against GenBank at the NCBI website to obtain the corresponding biological classification information for the bands. For processing of the sequencing data, raw fastq files were demultiplexed, quality filtered with Trimmomatic, and merged by FLASH with the following criteria. (i) The reads were truncated at any site receiving an average quality score of <20 over a 50-bp sliding window. (ii) Primers were matched allowing up to 2-nucleotide mismatches, and reads containing ambiguous bases were removed. (iii) Sequences with an overlap of more than 10 bp were merged according to their overlapping sequence. 
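Referring back to the band-based diversity indices defined above (richness, Shannon-Wiener, and evenness computed from DGGE band gray values), the arithmetic can be sketched with the following minimal Python snippet; the example gray values and variable names are ours and purely illustrative, not taken from the study.

import math

def dgge_diversity(gray_values):
    """Richness (S), Shannon-Wiener index (H), and evenness (H/ln S) from band gray values."""
    total = sum(gray_values)
    richness = len(gray_values)                                    # S: number of bands
    proportions = [g / total for g in gray_values]                 # n_i / sum(n_i)
    shannon = -sum(p * math.log(p) for p in proportions if p > 0)  # H = -sum(p_i * ln p_i)
    evenness = shannon / math.log(richness) if richness > 1 else 0.0
    return richness, shannon, evenness

# Hypothetical gray values for the bands of one DGGE lane
print(dgge_diversity([120.0, 85.0, 60.0, 33.0, 10.0]))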
Operational taxonomic units (OTUs) were clustered at a similarity cutoff of 97% by using UPARSE (version 7.1 [http://drive5.com/uparse/]); chimeric sequences were identified and removed by using UCHIME. The taxonomy of the 16S rRNA gene sequences was assigned by the RDP Classifier algorithm (http://rdp.cme.msu.edu/) against the SILVA 138 16S rRNA database using a confidence threshold of 70%. Networks were then constructed by using the method implemented in Cytoscape v.3.7.1. All data are expressed as means and standard deviations (SD) as determined by SPSS v.25. The results were analyzed with one-way analysis of variance (ANOVA) and Duncan's multiple-comparison test. The differences were considered statistically significant at a P value of <0.05. All box plots, stacked bar charts, and bar charts were drawn using GraphPad Prism v.8. Ethical approval. All procedures in this project were conducted within the ethical regulations and standards set and carried out by the Animal Care Review Committee of Jilin Agricultural University (ID: 2019-08-28-001). Data availability. Sequence data generated in this study have been made available at the Sequence Read Archive (SRA) on NCBI under project number PRJNA628749. SUPPLEMENTAL MATERIAL Supplemental material is available online only.
7,487.2
2022-12-13T00:00:00.000
[ "Biology", "Agricultural And Food Sciences" ]
A CA Hybrid of the Slow-to-Start and the Optimal Velocity Models and its Flow-Density Relation The s2s-OVCA is a cellular automaton (CA) hybrid of the optimal velocity (OV) model and the slow-to-start (s2s) model, which is introduced in the framework of the ultradiscretization method. Inverse ultradiscretization as well as the time continuous limit, which lead the s2s-OVCA to an integral-differential equation, are presented. Several traffic phases such as a free flow as well as slow flows corresponding to multiple metastable states are observed in the flow-density relations of the s2s-OVCA. Based on the properties of the stationary flow of the s2s-OVCA, the formulas for the flow-density relations are derived. Introduction Self-driven many-particle systems have provided a good microscopic point of view on the vehicle traffic [1,2]. The optimal velocity model [3] gives a description of such a system with a set of ordinary differential equations (ODE). It is a car-following model describing an adaptation to the optimal velocity that depends on the distance from the vehicle ahead. Another way of describing such systems is provided by cellular automata (CA). For example, the elementary CA of Rule 184 (ECA184) [4], the Fukui-Ishibashi (FI) model [5] and the slow-to-start (s2s) model [6] are CA describing vehicle traffic as self-driven many-particle systems. Studies of the self-driven many-particle systems have been wanting a framework that commands a bird's eye view of both ODE and CA models in a unified manner. Ultradiscretization [7], which gives a link between the KdV equation and integrable soliton CA [8], is expected to provide such a framework, for ultradiscretization can also be applied to non-integrable systems. As a first step, an ultradiscretization of the OV model [9] was presented, which lead to the s2s-OVCA [10]. The s2s-OVCA is a CA-type hybrid of the OV model and the s2s model. As we shall show in section 2, the s2s-OVCA reduces to an ODE that is an extension of the OV model in the inverse-ultradiscrete and the time-continuous limits. It was observed by numerical experiments that motion of the vehicles described by the s2s-OVCA went to stationary flow in the long run, irrespectively of the initial configuration [10,11]. It was also observed by numerical experiments that the flow-density relation for the stationary flow of the s2s-OVCA was piecewise linear and flipped-λ shaped diagram with several metastable slow branches [10]. Exact expression for the flow-density relation was given by a set of exact solutions giving stationary flows of the s2s-OVCA [11]. The flipped-λ shaped diagram captures the characteristic of observed flow-density relations [1,2]. We shall explain in section 3 the flow-density relation of the s2s-OVCA based on the properties of the stationary flow which was numerically observed [10]. s2s-OVCA and its Inverse Ultradiscretization The s2s-OVCA is given by a set of difference equations below, where the integers n 0 ≥ 0, v 0 ≥ 0 and x n k , k = 1, 2, · · · , K, are the monitoring period, the top speed and the position of the car k at the n-th discrete time. Note that the definition of the symbol min N k=0 is N min k=0 (a k ) := min(a 0 , a 1 , a 2 , · · · , a N ). The equation (1) is called an ultra-discrete equation in the sense that it is a difference equation which is piecewise linear with respect to the dependent variables x n k . 
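Equation (1) itself did not survive formatting here, so the following Python sketch should be read as an assumption: based on the description in this section, the speed of car k at time n is taken to be the minimum of the top speed v0 and its headways (empty cells ahead) over the last n0 + 1 time steps, on a periodic circuit of L cells. All variable names and parameter values are ours.

def s2s_ovca_step(history, L, v0, n0):
    """One s2s-OVCA update: each car advances by min(v0, headways over the last n0+1 steps)."""
    latest = history[-1]
    K = len(latest)
    new = []
    for k in range(K):
        # headway = empty cells between car k and the car ahead, at each monitored time
        headways = [(past[(k + 1) % K] - past[k] - 1) % L for past in history[-(n0 + 1):]]
        v = min([v0] + headways)
        new.append((latest[k] + v) % L)
    return tuple(new)

# Hypothetical run with the parameters quoted later in the paper: L = 100, K = 30, v0 = 3, n0 = 2
L, K, v0, n0 = 100, 30, 3, 2
x0 = tuple(range(0, 3 * K, 3))          # arbitrary ordered initial positions
history = [x0] * (n0 + 1)               # assume the cars were at rest before n = 0
for n in range(1000):
    history.append(s2s_ovca_step(history, L, v0, n0))

rho = K / L                             # density, as defined in section 3
moves = sum((history[n + 1][k] - history[n][k]) % L
            for n in range(800, 1000) for k in range(K))
Q = moves / (200 * L)                   # flow averaged over the late, stationary time window
print(rho, Q)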
The s2s-OVCA includes the ECA184 (n 0 = 0, v 0 = 1) [4], the FI model (n 0 = 0) [5] and the s2s model (n 0 = 1, v 0 = 1) [6] as its special cases. Since the second term in the right hand side of eq. (1) gives the speed of the car k at the time n, the s2s-OVCA describes many cars running on a single lane highway in one direction, which is driven by cautious drivers requiring enough headway to go on at least for n 0 time steps before they accelerate their cars. Without loss of generality, we can assume that the cars are arrayed in numerical order, x 0 1 < x 0 2 < · · · < x 0 K , which is also assumed throughout below. Then the number of empty cells between the cars k and k + 1 for any k is always non-negative, i.e., It is obvious that the inequality holds for n = 0. We assume that the inequality holds up to some n, as the induction hypothesis. The induction hypothesis as well as the definition of min assure the inequality for any k. Using equation (1), we get an expression of ∆x n k as ∆x n+1 The inequality (3) and the equation (4) show that the inequality (2) holds for n + 1. The inequality (2) means that both overtake and clash are prohibited by the s2s-OVCA. We should note that the s2s-OVCA is obtained from a difference equation by a limiting procedure named ultradiscretization [7], which generates a piecewise-linear equation from a difference equation via the limit formula, lim δx→+0 δx log N k=0 b k e a k /δx = max(a 0 , a 1 , a 2 , · · · , a N ) =: where arbitrary numbers b k must be positive. The equation (5) is rewritten as s2s-OVCA and Flow-Density Relation 3 for min(a 0 , a 1 , a 2 , · · · , a N ) = − max(−a 0 , −a 1 , −a 2 , · · · , −a N ). For the sake of convenience in the calculation below, we introduce two parameters in the s2s-OVCA, The parameters x 0 and δt are the length of a cell and the discrete time-step, respectively. Since we have shown ∆x n k − x 0 ≥ 0, the effective headway ∆ eff x n k − x 0 is also always non-negative, ∆ eff x n k − x 0 ≥ 0, for any k. With the aid of the identity, for any x > 0, which is given by the ultradiscrete which is an inverse-ultradiscretization of the optimal velocity function v u opt . Note that we have introduced arbitrary coefficients so as to make v d opt (0) = 0. In a similar way to the above calculation, an inverse-ultradiscretization of the effective interval ∆ u eff x n k is also obtained as Therefore an inverse-ultradiscretization of the us2s-OVCA is given by x n+1 k = x n k +v d opt (∆ d eff x n k )δt, which is explicitly written as In other words, the s2s-OVCA is given by the ultradiscrete limit δx → +0 of the above difference equation (10). The continuum limit δt → 0 of the above difference equation (10) goes to integral-differential equation where t 0 := n 0 δt and dx k dt = lim δt→0 In terms of an optimal velocity function and an effective distance, , the above integral-differential equation is expressed as Since the effective distance ∆ eff x k (t) goes to ∆x k (t) in the limit below, , this integral-differential equation is an extension of the Newell model [12], which is a car-following model dealing with retarded adaptation to the optimal velocity determined by the headway in the past. Replacement of t with t + t 0 in eq. and the Taylor expansion ofẋ which is equivalent to The equation of motion of the OV model is given by neglecting the higher order terms in the left hand side of the equation (13). 
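For reference, the ultradiscretization limit formula invoked above as eq. (5), which appears garbled here, presumably has the standard form
\[
\lim_{\delta x \to +0} \delta x \log \sum_{k=0}^{N} b_k \, e^{a_k/\delta x} \;=\; \max(a_0, a_1, \ldots, a_N), \qquad b_k > 0,
\]
together with the companion identity $\min(a_0,\ldots,a_N) = -\max(-a_0,\ldots,-a_N)$ used to pass to the min-plus form of the s2s-OVCA.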
The discussion shown above in this section shows how the inverse ultradiscretization and the time continuous limit connect the s2s model and the Newell model, which approximates the OV model, through the s2s-OVCA. Figure 1 gives typical examples of the spatio-temporal pattern showing jams and the flow-density relation of the s2s-OVCA [10]. In the numerical calculation, the periodic boundary condition is imposed and the length of the circuit L, which is the same as the number of all the cells, is fixed at L = 100. The number of the cars K is set at K = 30. The maximum velocity v 0 and the monitoring period n 0 are v 0 = 3 and n 0 = 2. Flow-Density Relation The spatio temporal pattern shows the trajectories of the cars. As we can see, irregular motion of cars is observed in the early stage of the time evolution, 0 ≤ n ≤ 30, where n is the time. But after that, the flow of the cars become stationary in the sense that length of the jam is almost constant and that cars with intermediate speeds appear only temporarily. The flows Q in the flow-density relation are computed by averaging over the time period 800 ≤ n ≤ 1000, in which the traffic is expected to be stationary in the above mentioned sense. The car density ρ := K L . As we have mentioned before, the flow-density relation of the s2s-OVCA is piecewise linear and flipped-λ shaped diagram with several metastable slow branches. The flow-density relation shown above is derived by admitting the features of the flow of the s2s-OVCA. Namely, the flow of the s2s-OVCA goes to one of the stationary flows in the long run. The stationary flows consist of the free flow in which all the cars run at the top speed v 0 and the slow flows that always contain slow cars running at the minimum speed v ∞ min , 0 ≤ v ∞ min < v 0 , which remains constant. Formation of the line of slow cars corresponds to that of traffic jam. In the slow flows, lengths of the jams are almost constant and fluctuate periodically. Our previous paper [11] gives a set of such stationary flows. Note that the number 0 at the leftmost shows the time. The digits and the blank symbols in the above configuration mean the indices of the cars and the empty cells, respectively. Thus the number of the cars K is 10 and the length of the circuit L is 38 in this case. We set the monitoring period n 0 at 2. The speed of the cars 4, 9 and 0 is 3, which is the top speed v 0 of this case. The speed of the car 5 is 2, whose headway is also 2. All the other cars' speeds are 1, whose headways are also 1 except for the car 3. Thus the headways of the cars in tha past have nothing to do with the motion of the cars in the future except for the car 3. The headway of the car 3 at the time −1 is set to be 1. Out of the above initial configuration (15), the equation (1) Note that the minimum speed of the cars v ∞ min is 1 in the flow above. We notice that the configuration at the time 3 is obtained by moving all the cells of the initial configuration one cell rightward as well as changing the car indices k to k − 1 modulo 10. The configuration at the time 6 is also obtained by doing the same shifts and changes of car indices to the configuration at the time 3. In this sense, the above flow is a periodic motion of cars whose period is 3 in this case. The length of the jam, or the number of the cars running at the minimum speed, is thus almost constant. Intermediate speeds also appear only temporarily. That is why we call them stationary flows of the s2s-OVCA. 
Roughly speaking, the slow flows we shall deal with is the stationary flow of the type shown above. The density of the cars ρ and the average flow Q over the period, or the n 0 + 1 = 3 steps, are calculated as v n ′ k = 6 + 3 + 3 + 5 + 9 + 4 + 3 + 3 + 3 + 9 3 × 38 = 8 19 , which will be verified with the formula we shall derive shortly. Let us consider such slow flows as we have seen above as the specific solutions in a more general manner. Figure 3 shows configurations of a slow flow at times n and n + n 0 + 1. Since Two cells are the same under PBC. we employ the periodic boundary condition, two cells containing the car K are identified. As a property of the slow flow, we assume that the slow flow is periodic in the sense that the configuration at the time n + n 0 + 1 in the box is given by the rightward displacement of the entire configuration at the time n in the box by n 0 v ∞ min − 1 cells. The flow provided by this displacement of the entire configuration in n 0 + 1 time steps is For example, the rightward displacement mentioned above for the slow flow in fig. 3 is 2 × 1 − 1 = 1, which agrees with the observation before. The set of stationary flows given in [11] has the property of the slow flow we here assume. Here we should note that the leftward displacement of the car K by L cells, namely whole the circuit length, which is fictitiously introduced to make the shifted initial configuration from the real configuration at the time n 0 + 1 in the sense that the numerical order of the car arrays is maintained. In order to compensate the underestimation of the flow brought about by this leftward displacement, we have to add the flow corresponding to the rightward displacement of the car K by L cells in n 0 + 1 time steps, 1 (n 0 +1)L · L = 1 n 0 +1 . Thus the flow of the slow flow with the minimum speed v ∞ min is given by For example, substitution of ρ = 5 19 , n 0 = 2 and v ∞ min = 1 into eq. (16) yields which agrees with the flow Q = 8 19 for the slow flow given above as an specific solution. The formula (16) agrees with the flow-density relation given by numerical experiments, as we can see in fig. 2. Three branches labeled with v = 2, 1 and 0 are the flow-density relations with the minimum speeds v ∞ min = v in fig. 2. The maximum density ρ max (v ∞ min ) that allows the minimum speed to be v ∞ min is The flow Q(ρ max (v ∞ min )) corresponding to the maximum density ρ max (v ∞ min ) is then given by Since the two equations (17) and (18) holds at the same time, they leads to Q(ρ max (v ∞ min )) + ρ max (v ∞ min ) = 1. Thus all the end points of the branches must be on the line The branching point, or the minimum density, of the flow-density relation of the slow flow corresponding to the minimum speed v ∞ min is determined by the intersection of the flow density relations of the free flow and the slow flow, In fig. 2, the branching points corresponding to v ∞ min = 2, 1 and 0 are encircled with small circles, which agree with the above formula (20). The density of the cars ρ needs to be sufficiently large so as to form the slow flow with the minimum speed v ∞ min . The branching point gives the lower bound of such density. Summary We have shown an inverse ultradiscretization from the s2s-OVCA (1) to an integral-differential equation (11), which is an extension of the Newell model (2). Since the Newell model [12] and the s2s-OVCA [10] are extended models of the OV [3] and the s2s models [6] respectively, the s2s-OVCA is interpreted as a CA-type hybrid of the OV and the s2s models. 
Using the features of the stationary flows observed in the numerical experiments, we have derived the flow-density relations of the stationary flow of the s2s-OVCA. The flow-density relations of the s2s-OVCA were numerically obtained [10] and then derived by use of a set of stationary flows [11]. The s2s-OVCA has several types of monotonicity in its time evolution, which extend the results shown for the n 0 = 1 case [13]. We expect that the monotonicity determines the relaxation to the stationary flow from the initial configuration as well as the property of the stationary flow we assume here. We hope that results on the relaxation to stationary flows and the monotonicity in the time evolution of the s2s-OVCA will be reported soon.
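A note for the reader: the displayed slow-flow flow-density relation of section 3, eq. (16), did not survive formatting above. A form consistent with the displacement argument and with the quoted numerical check (ρ = 5/19, n0 = 2, v∞min = 1 giving Q = 8/19) is presumably
\[
Q(\rho) \;=\; \frac{\rho\,\bigl(n_0 v^{\infty}_{\min} - 1\bigr) + 1}{n_0 + 1},
\]
since $\bigl(\tfrac{5}{19}(2\cdot 1 - 1) + 1\bigr)/3 = \tfrac{24}{19}\cdot\tfrac{1}{3} = \tfrac{8}{19}$; its end points then lie on the line Q + ρ = 1, as stated in the text.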
3,921.4
2015-03-30T00:00:00.000
[ "Physics" ]
Effect of a Circular Ring on the Side Force of a Cone – Cylinder Body INTRODUCTION A pointed cylindrical nose shape finds its application in different aerospace vehicles (e.g., fighter aircraft, tactical missiles). Depending on the requirement, the nose can be of various shapes, such as conical, ogival or of blunted tip. During maneuvers, these aerospace vehicles experience different angle of attack regimes. In general, for conical noses, at an angle of α < 15°, the oncoming flow detaches and curls up into a symmetrical vortex pair along the body. Due to the symmetric nature of the vortices on the leeward side, the body under consideration experiences a very low or negligible side force, while the pressure distribution about the mid vertical plane remains almost symmetric. The angle at which the side force is established may vary for different nose shapes and flow conditions. For instance, at 15° < α < 60°, the vortex pair appears asymmetric in a cross-plane, as shown in Fig. 1. The figure shows the primary and secondary separations (ØS1 and ØS2), and the primary and secondary attachments (ØA1 and ØA2), where Ø is the roll angle. It is clearly observed that the right vortex is larger than the left one. 
This creates a difference in the pressure distribution about the vertical plane of symmetry, leading to the generation of the side force (Allen and Perkins 1951;Lamont and Hunt 1976;Keener et al. 1977;Hunt and Dexter 1979;Ericsson and Reding 1980;Dexter and Hunt 1981;Lamont 1982;Zilliac et al. 1991;Pidd and Smith 1991;Liu P and Deng 2003;Xuashi et al. 2009;Kumar and Prasad 2016b). The flow, after encountering the body, has been seen to roll up into a pair of vortices, which lift further in the downstream, resulting in the generation of another vortex, beneath the previous one. Due to the tip perturbation, one of the vortices tends to lift earlier. Hence, a multi-vortex system is formed, that appears to be arranged alternately on the leeward side of the body. Because of such a vortex system, the flow pattern in different cross-planes appears to be asymmetric. With further increase in the α beyond 60 o , the dominance of the global 02/26 instability increases, and the flow appears to be similar to that of an inclined cylinder in a cross-flow. The phenomena of vortex shedding starts to dominate the flow, due to which the time averaged side force reduces drastically. In the angle of attack ranging from 20 o to 60 o , the extent of the side force and its direction are found to be highly dependent upon nose geometry, nose apex angles, angle of attack (α), roll angle (Ø), Reynolds number, slenderness ratio, etc. A small nose apex angle may induce the side force even at a very low angle of attack. Highly maneuverable missiles, frontal portion of fighter aircrafts, protruding probes, external loads, etc., often have pointed forebody structure, and these may encounter low Reynolds number during maneuvers. Hence, many investigations have been made in the last few decades adopting experiments and computations to understand and alleviate the existing side force on the slendervehicle at a higher αangle of attack and a low Reynolds number. Moreover, studies at very low speeds also enable the researchers to have a basic understanding on the flow behavior over a pointed nose slender-body at higher angles of attack. This problem was first identified and reported by Allen and Perkins (1951). An extensive study was done by Lamont and Hunt (1976) on a slender-body to understand the behavior of the flow at different angles of attack. The side force on the body was observed to be a function of the nose shape and the Reynolds number. Studies made by Keener et al. (1977) indicated that the side force was highly dependent upon the nose tip only. Interestingly, said study also revealed that, at a higher α, the side loads could be as huge as 1.5 times the normal force. Ericsson and Reding (1980) reviewed the studies made earlier at different ranges of the Reynolds number, aiming to re-examine the maximum normalized force, the self-induced coning motion of the body due to the vortices, and laminar and turbulent separation on the slender-body at higher angles of attack. Hunt and Dexter (1979) expected that a very low freestream turbulence level could alleviate the existence of side force at a high angle of attack, reason why the experiments were also performed in a wind tunnel with a streamwise turbulence of 0.01%. However, there was no improvement in the results. Experiments performed by Lamont (1982) indicated that the side force reduced substantially in the transitional flow regime. Zilliac et al. 
(1991) conducted the experiments on an ogivenosed slender-body at a Reynolds number diameter of 30,000, and showed the "bi-stable state" of side force in the roll angle range of 45 o to 55 o . It is an established fact that the conical nose shapes experience a lesser axial force in comparison to the other nose shapes. However, at higher angles of attack, the conical nose may induce a very large side force (Pidd and Smith 1991;Liu P and Deng 2003). Experiments and computations conducted over an ogive-nosed slender-body at a Reynolds number of 29,000 by Kumar and Prasad (2016b) showed that the existence of the side force is mainly due to the establishment of a multi-vortex system arranged alternately in the leeward of the body. Due to the alternate arrangement of the vortices, the flow appears to be asymmetric in the different cross-planes. The vortices grow and lift in the downstream, under the influence of the adverse pressure gradients. Computations of such flow fields at a high α are very challenging (Cummings et al., 2003). Achieving asymmetry in the leeward flow without any artificial disturbance is difficult. Degani and Schiff (1991) used an artificial disturbance at the nose tip to produce the asymmetry in the flow, which indicated good agreement with the experimental results. Several other techniques to produce the asymmetry in the vortices have been reported (Degani and Levy 1992;Xiaorong et al. 2009;Lim et al. 2009). The existing side force on the aerospace vehicles may prove to be highly detrimental to the vehicle structure and its stability. Efforts have been made in the past to reduce the side force and the induced moments. An effective method of reducing the side force at a Reynolds number of 35000 based on diameter using helical grooves and trips is reported by Lua et al. (2000). The mechanism behind the reduction in the side force is discussed by Kumar and Prasad(2016b). Leu et al. (2005) showed the decrease in the side force using microballoon actuators over a cone-cylinder body. In recent times, experiments and computations carried out by Kumar and Prasad (2017) showed that the use of rings reduces the side force over an ogive-cylinder body. Based on the literature, it is evident that slender-body with conical nose has the advantage of lower axial force and drag in comparison to the other nose shapes. However, the side force generated on the conical forebodies at higher angles of attack is massive. In the present investigations, experiments and computations have been made on a slender-body with the conical nose at a Reynolds diameter of 34,000. Investigations on similar Reynolds number are reported in the literature and have been previously discussed. Based on the results reported by Kumar and Prasad (2016b;2017) and Lua et al. (2000) for ogive-nosed slenderbodies, a rectangular cross-sectioned ring reduced the side force at different α for the case of a cone-cylinder body, type of body expected to have lesser axial force at lower angles of attack, which, in turn, can reduce the drag. However, the flow can be extremely complicated over a cone-cylinder body during maneuvers when it experiences high angles of attack and low Reynolds number. As indicated in the literature, the side force is highly sensitive to the nose shape. 
The investigation presented here aims to understand the suitability of the rectangular cross-sectioned ring to reduce the existing side force over a cone-cylinder body, given that studies on this type of bodies are very limited in the reported literature. EXPERIMENTAL TECHNIQUES Experiments have been performed in the Subsonic Wind Tunnel, which has a test section size of 2ft × 2ft and a turbulence intensity lower than 0.5%. The freestream velocity (U ∞ ) was kept as 20 m/s, corresponding to a Reynolds number of 34,000, considering the base diameter (D) of the model. The angle of attack ranged from 0 o to 50 o . The model used in the present investigation had a conical nose shape with a fineness ratio of approximately 3, and a semi-apex angle of 10 o . The geometry of the nose was kept similar to the model used by Xuashi et al. (2009). The base diameter and the overall length of the body (L) were kept as 25 mm and 250 mm, respectively, leading to an L/D ratio of 10. A PC controlled incidence setup was used to change the angle of attack. A three-axis calibrated inclinometer was used to fix the desired α. All the experiments were performed with and without a rectangular cross-sectioned circular ring. Two rings of different sizes were fabricated, one having a height of 2% of the local diameter at X/D ≈ 1, where X stands for the length from the tim; and the other having a height of 5% of the local diameter at X/D ≈ 2.5. The ring heights and locations were chosen so that the ratio of the height of the ring from body surface (h) to the local diameter (d) of the cone is a function of the axial distance. In the present case, h/d at X/D ≈ 2.5 is around 2.5 times of h/d at X/D ≈ 1. This was based on the results reported in Kumar and Prasad (2016b), which indicated that a smaller disturbance in the downstream will have lesser effect on the flow over a slender-body at high angles of attack. Two different models were made for the measurement of the forces and surface flow visualizations. The details of the model and strain gage balance used for the force measurement are displayed in Fig. 2. The internal six component strain gage balance had a diameter of 10 mm, which was suitably fixed inside the test model. The axial force was considered positive in +X -direction, normal force was positive upwards in +Z -direction and the side force was positive in +Y-direction, as depicted in the coordinate system of Fig. 2. The strain gage was fixed in a way so that it was an integral part of the overall length of the model. It was ensured that the surface remained smooth at the joints, allowing the flow field to stay unaffected. Negligible vibration in the model was observed during the test runs. A 3V DC power supply having a good signal to noise ratio was used for exciting the strain gage balance. The data were obtained using a PC-based data acquisition system and signal conditioner. A low pass filter with a cut off frequency of 10 Hz was used for all the force measurements (Kumar and Prasad 2016b). 02/26 04/26 It is well-known fact that the side force on the slender-body is highly dependent upon the roll angles (Keener et al. 1977;Hunt and Dexter 1979;Ericsson and Reding 1980;Dexter and Hunt;Lamont 1982;Zilliac et al. 1991) and exhibits a bistable state with the variation in the roll angles at high angles of attack. Therefore, it was decided to fix the roll angle of the test model for the measurement of the forces. 
Based on the results reported by Kumar and Prasad (2016b), this angle was fixed randomly, although it was ensured that there was no variation in it during the experiments, since any variation in the roll angle changes the flow field entirely: a change in the roll angle changes, in turn, the orientation of the surface roughness in the azimuth, which mainly governs the asymmetry of the vortex and, hence, the side force. Surface flow visualization was made using the oil flow technique. The details of the model used for oil flow and the incidence mechanism are shown in Fig. 3. COMPUTATION Incompressible, three-dimensional, time-dependent computations were made at different α, using the commercial software ANSYS 17.0. It uses the finite volume approach for solving the incompressible Reynolds-averaged Navier-Stokes equations. Segregated, implicit schemes were used in the present computations. A second-order discretization scheme was employed for temporal, spatial, and turbulence quantities. The computational domain was chosen to be spherical (Fig. 4) as it yielded better residuals in comparison to other domains (Kumar and Prasad 2016b). The outer boundary was kept at a distance of 50 times from the center of the body. A freestream velocity of 20 m/s and a turbulent viscosity ratio of 2 were enforced at the inlet boundary in the X-direction. One half of the spherical domain was kept as the inlet and the other half was kept as the outflow, where the required information was extrapolated from the interior. The choice of the spherical domain was also based on the CFD work carried out by several researchers on two-dimensional circular cylinders, which yields reasonably good results with a circular domain. A no-slip condition was employed on the wall surfaces. Based on the literature, it is essential to provide an artificial micro-perturbation at the nose tip in order to bring about the asymmetry in the flow. The artificial micro-perturbations are a representation of the micro-surface roughness/perturbation at the nose, which is believed to be the cause of vortex asymmetry. The shape, size, and location of the artificial perturbation should be fixed in such a way that the computational results become comparable with the experiments. Figure 5 shows the micro-perturbation, having a maximum height of 0.004 D, length of 0.02 D, and width of 0.004 D, at a location of θ = 225°. This shape and size were arrived at after several attempts. To match the experiments, it is desirable to have a different tip perturbation for different angles of attack; otherwise it will not be possible to have a reasonable agreement between computations and experiments. However, in the present study, the perturbation geometry and location have been kept fixed. Due to this, differences between the computations and experiments have been observed at several angles of attack. Nevertheless, computations performed at different angles of attack will help in better understanding and analysing the flow physics. The convergence history of the side force without a ring at α = 40° is displayed in Fig. 6, which indicates that the flow seems to be in a quasi-steady state. Similar observations were also made by Degani and Schiff (1991), and Degani and Levy (1992). Based on the results reported by several researchers (Kumar and Nair 2013; Champigny et al. 2006; Cummings et al. 
2003), the Spalart-Allmaras turbulence model was chosen, which was also found to be suitable for massively detached flow. Gird independence test was carried out using different grid sizes of Grid 1 (0.6 million), Grid 2 (1.1 million), and Grid 3(1.8 million) at α = 35 o . Table 1 shows the results of the grid independence test, which indicate that changes in the grids did not affect the side force significantly for the cone-cylinder body. The side force obtained using the present tip micro-perturbation indicated the nearest value with the experiments. Hence, Grid 3 was chosen for further computations and to capture the finer details as well. Figure 7 brings the comparison of the present computation and experimental pressure distribution (Xuashi et al., 2009) obtained at X/D = 0.834, which indicates reasonably good agreement. The differences observed in the experimental and static pressure distribution are likely to be due to the grids or turbulence model. However, the obtained results will be helpful in the basic understanding of the flow features over a cone-cylinder at higher angles of attack. RESULTS AND DISCUSSIONS WITHOUT RING Figure 8 presents the comparison of the measured time-averaged side force on the cone-cylinder and ogive-cylinder models at different angles of attack. The presented results of ogive-cylinder model hve been reported in Kumar and Prasad (2016b). It is clearly observed that the side force for cone-cylinder bodies initiates much early in comparison to ogive-cylinder models. In the second place, in ogive-cylinder model, it is observed that the side force keeps on increasing with the increase in the angle of attack. However, the side force behavior is somewhat random for cone-cylinder bodies, which clearly indicates the complicacies involved with the use of this type of model. It is observed that the side force in case of cone-cylinder starts increasing from α ≈ 15 o . 02/26 08/26 With the increase in the angle of attack, the side force coefficient (C yo ) increases and reaches a maximum value of -3.93 at α ≈ 45 o . The measured overall side force was observed to fluctuate at different angles of attack. The presence of the asymmetric vortex is the main reason behind the increase in the side force. As discussed in the beginning, due to the asymmetric vortex differences in the pressures across the mid vertical plane is observed which leads to the generation of the local side force. The local side forces established all along the length of the body leads to the overall side force of the body. This clearly indicates the complexity involved in the flow of cone-cylinder configuration in comparison to the other nose geometries (e.g. ogive, blunted, etc.), where such a fluctuation in the side force is rarely observed. It is to be noted that for the present configuration the maximum side force obtained at α ≈ 45 o was around 50% of the lift coefficient (C L ) at that angle of attack. Controlling such a huge side force using the conventional control surface is very difficult as the control surfaces itself remain in the wake of the body. To obtain the surface flow features such as flow separation, attachments, etc., surface flow visualization using oil flow technique was carried out on the cone-cylinder body. Figure 9 shows the surface flow pattern on the leeward side (θ = 180 o ) of the body at α ≈ 45 o . It could be observed that the asymmetry in the flow starts from a very initial stage which is represented by the separation lines along the body. 
In the downstream, the flow separation is represented by the coagulated oil mixture (visible as a line). Movement in the separation lines was observed along the length of the body. This clearly indicates the complexity involved in the flow at that angle of attack. Figure Computations were made on the cone-cylinder body at higher angles of attack. It is a known fact that computations made on the slender-body without a tip perturbation or any other disturbances do not yield the vortex asymmetry (Kumar and Prasad 2016b;Degani aand Schiff 1991;Degani and Levy 1992). The location, shape, size etc., of the disturbance is required to be fixed in a way so that the computational results match with that of the experimental results. However, it is very difficult to obtain a good agreement between computations and experiments using the same perturbation geometry and location at different angles of attack. In the present study only, a single geometry of perturbation and its location has been fixed. It might not yield a good agreement with the experiments, but the results obtained from the computations will help in a better qualitative analysis of the flow. Figure 12 shows the comparison of the side force obtained through experiments and computations. In the case of cone-cylinders the problem becomes more complicated and it is difficult to have a good agreement with experiments at different angles of attack. However, the obtained result will help in the physical interpretation of the flow. A reasonable agreement of the overall side force at α = 35 o and 50 o was observed. But a major difference in the side force was observed at α = 45 o . This difference could be due to several reasons such as insufficient grids, turbulence model, perturbation locations, etc. Nevertheless, the existence of the side force itself at different angles of attack is sufficient for the understanding of the flow phenomenon. A rise in the magnitude of the side force was observed with increasing angles of attack. Figure 13 shows the static pressure distribution circumferentially on the cone-cylinder body at α = 35 o , 40 o , 45 o and 50 o . Asymmetry in the pressure distribution at α = 35 o and X/D = 2 is clearly observed. This 1/26 11/26 is mainly due to the differences in the vortices on the leeward side of the body. Towards the downstream, the magnitude of the pressure reduces which confirms the lift of the vortices. With further increase in the angle of attack, it is observed that the magnitude of pressure increases at X/D = 2. This confirms that the vortices become stronger and dominating with increasing angle of attack. However, at X/D = 4, the magnitude of the pressure with increasing angle of attack starts to reduce. It is mainly because the increase in the angle of attack shifts the vortex lift location upstream. At X/D = 6 and 8, increase in the static pressure is likely due to the growth of another vortex system behind the initial vortex. These phenomena could be well understood using the vorticity magnitude contour shown in Fig. 14 appear asymmetric in different cross-flow planes along the body. At X/D = 2, the vortex in the left remains attached to the body while the vortex in the right is on the verge of separation. The vortex in the left grows and lifts in the downstream. At X/D = 8, the growth of another vortex is observed after the separation of initial vortex system. With the increase in the angle of attack, the vortices tend to squeeze upstream leading to several vortices on the leeward side. 
Similar observations are also reported by Kumar and Prasad (2016b) for the case of Ogive-cylinder configurations. Figure 15 shows the overall vorticity magnitude contour at different angles of attack. It is evident that with an increase in the angle of attack the vortices tend to move vortex lift location upstream. A multi vortex system is also observed at higher angles of attack of 45 o and 50 o . Based on the experimental and computational results it is definite that disturbances if properly introduced on the cone-cylinder body will affect the flow and hence the side force on the body. Similar observations were made by Kumar and Prasad (2016b) for the case of ogive -cylinder configuration where a suitably placed circular ring was used to reduce the side force. WITH RING In the present investigations, a single ring was used for two different cases. As discussed earlier, it was ensured that for all the experiments conducted using ring, the roll orientation (Ø) of the model was kept same as done for the case of no ring. This is necessary as any change in the roll angle will vary the flow physics (Lamont and Hunt 1976;Keener et al. 1977;Hunt and Dexter 1979;Ericsson and Reding 1980;Dexter and Hunt;Lamont 1982;Zilliac et al. 1991) and hence the side force. And therefore, justifying the applicability of a control technique to reduce the side force without fixing the roll orientation of the slender-body will be inappropriate. In the first case, the height of the ring was kept as 2% of the local diameter placed at X/D ≈ 1 and in the second case, the height of the ring was kept as 5% of the local diameter placed at X/D ≈ 2.5. The ring with height of 2% of the local diameter was kept to understand the behavior of the side force in presence of a very small perturbation. While the case of a ring having a height of 5% of the local diameter was a larger disturbance. Figure 16 shows the measured side force on the cone-cylinder body at different angles of attack for the case of with and without rings. The results clearly indicate that use of a small ring (2% of the local diameter) very near to the tip reduces the 1/26 13/26 side force in the angle of attack range of 15 o to 35 o . However, no significant change in the side force was observed in the angle of attack range of 35 o to 45 o . Surprisingly, at angles of attack greater than 45 o , the side force increased drastically in the opposite direction. It indicates that the presence of small height ring at locations near to the tip is beneficial at lower angles of attack only. At higher angles of attack, the flow is severely affected due to the ring which acts like a perturbation at the tip itself, leading to a very large side force. For the larger ring (5%) placed at X/D = 2.5, change in the direction of the side force was observed at low angles of attack itself. Further increase in the angle of attack indicated that the side force remained in the same direction which was not observed in the case of 2% ring. With the use of 5% ring, a significant reduction in the magnitude of the side force was observed at several angles of attack, however, the direction of the side force was reversed. From Fig. 16, it is indicative that the use of any ring on the cone-cylinder body exhibits a considerable change in the flow physics of the body that leads to the changes in the side force. Similar observations were also made in the case of ogivecylinder (Kumar and Prasad 2016b). 
However, changes in the direction of side force were observed only at very high angles of attack (α > 40 o ) which is not observed in the case of cone-cylinder. Without ring With ring at X/D ≈ 1 With ring at X/D ≈ 2. To have a more detailed physical interpretation of the flow over cone-cylinder configuration with 2% ring at X/D ≈ 1, the surface flow visualizations were made. Figure 17 shows the oil flow visualizations made at different angles of attack using the 2% ring. At α = 35 o , the comparison of the oil flows made for the case of no ring, 2% and 5% ring clearly indicate that for the case of no ring and 2% ring, the initial vortices remain almost unaffected which is identified from the separation lines. Since the contribution of the initial vortices is maximum towards the overall side force hence no significant changes were observed using 2% ring. However, with the 5% ring placed at X/D ≈ 2.5, the initial vortex system was disturbed and their effect propagated in the downstream. This led to the changes in the direction and magnitude of the side force (Fig. 16) for the case of 5% ring. Figure 18 shows the effect of angle of attack on the cone-cylinder configuration with 2% ring. It is evident from the Figs. that initial vortex system at an angle of attack of 35 o and 40 o is not significantly disturbed with the use of 2% ring which can be identified from the similar pattern of the separation lines. Due to this, no major variation in the overall side force is observed at these angles of attack. However, at α = 45 o , the initial vortical structures are affected due to the ring. This is mainly because at high angles of attack, the vortices get more squeezed towards the tip, leading to the asymmetric vortical structure along the body. The disturbance brought in the initial vortex affects the downstream vortex as well and hence variation in the overall side force was observed. With further increase in the angle of attack to 50 o , the flow experiences more disturbance. Comparison between the oil flows carried out at α = 50 o for the case of no ring and 2% ring in Fig. 18 clearly shows the difference. Such variation in the flow behavior affects the side force drastically which was observed in the case of force measurement (Fig. 16). Figure 19 shows the oil flow visualizations made on the cone-cylinder configuration with 5% ring placed X/D ≈ 2.5. In comparison to the case of no ring, it is clearly observed that the initial vortex 1/26 15/26 system grows and gets disturbed by the ring placed at X/D ≈ 2.5. It could be observed from the separation lines that the flow on the leeward side is asymmetric. The degree of asymmetry increases with increasing angle of attack. Due to the presence of 5% ring at X/D ≈ 2.5, the growth of the vortices is disturbed at all the angles of attack. At α = 35 o , a pair of new vortex seems to emerge after the ring which dominates the flow in the downstream. At higher angles of attack (α > 35 o ), changes in the vortex pattern in the downstream is clearly observed. As mentioned previously, the increase in the angle of attack of the slender-body forces the vortical structures to get squeezed towards the tip. Due to these, a multiple vortex patterns get established in the leeward side of the body. Any disturbance (such as ring) in the in the initial portion of the body will affect the flow in the downstream which is observed in the oil flow visualization. Further, the changes in the flow pattern is also governed by the shape, size, and location of the ring. 
As mentioned previously, the results obtained through computations are qualitative and are being used in the present study for better interpretation of flow for the case of with and without ring. The computational studies also showed somewhat similar result as observed in the case of experiments. Figures 20 to 23 shows the circumferential pressure distribution at different axial location and angles of attack. At α = 35 o , it is clear that the presence of 2% ring didn't produce any significant effect on the pressure distribution and hence no major variation in the side force was observed. However, the use of 5% ring at X/D ≈ 2.5 clearly obstructs the flow leading to a sudden decrease in the suction pressure in the downstream of the ring. A similar observation was made at all the angle of attack. At α = 50 o , the 2% ring altered the flow in the initial portion (X/D = 2) of the body itself which continued in the downstream as well. Due to this, the differences in the circumferential pressure distribution is observed at all the axial locations in comparison to the case of no ring. This leads to a major change in the overall side force observed in the experiments (Fig. 16). Figure 24 shows the contours of vorticity magnitude at the different axial location for the case of with and without a ring at α = 35 o . It is observed that at X/D = 2, the disturbance brought about in the flow with the use 2% ring at X/D ≈ 1 is quite lesser. in the overall side force with the use of 2% ring. On the other hand, use of 5% ring at X/D ≈ 2.5 alters the flow in the downstream as well which could be clearly observed at X/D = 4, 6 and 8. This is the reason for the change in the direction and magnitude of the side force using 5% ring. A clear shift in the vortex asymmetry is observed at these locations. The overall vorticity magnitude contour on the body is shown in Fig. 25. formation severely (Fig. 28). It was observed that at X/D = 4 and 6, the vortex asymmetry pattern had reversed in comparison to the case of no ring. This could be the reason for the reduction of the side force observed in the experiments. However, in the 5% ring, it was observed that the vortex asymmetry pattern was a reverse image of the case of no ring in different cross-flow planes. This clearly indicates the change in the direction of the side force by using 5% ring. The differences in the three-dimensional vortex structure for the case of no ring, 2% and 5% ring can be observed in Fig. 29. ( Figs. 30 and 31). Based on the results obtained through experiments and computations, it is definite that the side force can be reduced with the help of rings for the case of cone-cylinder configuration. Since the use of ring acts as an obstacle in the flow, it becomes imperative to obtain the effect of the ring on the other parameters such as lift and drag. Figure 32 shows the variation of the lift coefficient with the use of rings. The use of 2% ring did not have any significant effect on the lift coefficient at lower angles of attack, while the use of 5% ring reduced the lift on the body. However, at higher angles of attack, the use of rings increased the lift on the body (α > 40 o ). This is mainly because of the formation of strong trailing vortices as observed from the vorticity magnitude contour. Figure 33 shows the effect of the ring on the drag. As observed in Fig. 32, the drag also reduced at lower angles of attack for the case of 5% ring, however, it increased slightly at a higher angle of attack with the use of ring. 
Although reducing the side force on a cone-cylinder geometry at higher angles of attack is a very challenging task. In the present investigation, an effort has been made to control the side force using an axisymmetric ring. It is expected that use of such a ring will not change the flow physics with variation in the roll angle. The outcome of this investigation will be highly useful in the design of the highly tactical missiles and aircraft. CONCLUSION Experiments and computations were performed on a cone-cylinder geometry at different angles of attack. With the increase in the angle of attack, the overall side force increased on the body, however, unlike the other nose shapes of the ogive, blunt etc., the variation in the side force was found to highly sensitive towards the angle of attack. Results indicated that the use of a circular ring having a height of 2% of the local diameter placed at X/D ≈ 1, was not able to reduce the side force at lower angles of attack. The ring with higher height placed suitably in the downstream at X/D ≈ 2.5 could reduce the side force at all the angles of attack for the present flow condition, however, reversal in the direction of the side force was observed. With increase in the angle of attack, the vortices became stronger. In the angle of attack range of 0 to 40 o , the height of the 2% ring placed at X/D ≈ 1 was not sufficient to disturb the growing vortex and hence the side force remained almost undisturbed with the use of 1% ring at 0 o < α < 40 o . At α > 40 o , the vortices become more unstable and hence even a small ring height could alter the flow field and hence reversal in the side force was observed
8,811.6
2020-03-03T00:00:00.000
[ "Engineering" ]
Disentangling Dynamical Quantum Coherences in the Fenna-Matthews-Olson Complex In the primary step of light-harvesting, the energy of a photon is captured in antenna chlorophyll as an exciton. Its efficient conversion to stored chemical potential occurs in the special pair reaction center, which has to be reached by down-hill ultrafast excited-state energy transport. The interaction between the chromophores leads to spatial delocalization and quantum coherence effects, the importance of which depends on the coupling between the chlorophylls in relation to the intensity of the fluctuations and reorganization dynamics of the protein matrix, or bath. The latter induce uncorrelated modulations of the site energies, and thus quantum decoherence, and localization of the exciton. The current consensus is that under physiological conditions quantum decoherence occurs on the 10 fs time scale, and quantum coherence plays little role in the observed picosecond energy transfer dynamics. In this work, we reaffirm this from a different point of view by finding that the true onset of electronic quantum coherence only occurs at extremely low temperatures of ~20 K. We have directly determined the exciton coherence times by two-dimensional electronic spectroscopy of the Fenna-Matthews-Olson complex over an extensive temperature range with supporting theoretical modelling. At 20 K, electronic coherences persist out to 200 fs (close to the antenna) and marginally up to 500 fs at the reaction-center side. They decay markedly faster with modest increases in temperature and become irrelevant above 150 K. This temperature dependence also allows disentangling the previously reported long-lived beatings. We show that they result from mixing of vibrational coherences in the electronic ground state. We also uncover the relevant electronic coherence between excited electronic states and examine the temperature-dependent non-Markovianity of the transfer dynamics to show that the bath involves uncorrelated motions even down to low temperatures. The observed temperature dependence allows a clear separation of the fragile electronic coherence from the robust vibrational coherence. The specific details of the critical bath interaction are treated through a theoretical model based on measured bath parameters that reproduces the temperature-dependent dynamics. By this, we provide a complete picture of the bath interaction which places these systems in the regime of strong bath coupling. We believe this main conclusion to be generically valid for light-harvesting systems. This principle makes the systems robust against otherwise fragile quantum effects, as evidenced by the strong temperature dependence. We conclude that nature explicitly exploits decoherence or dissipation in engineering site energies to yield downhill energy gradients to unerringly direct energy, even on the fastest time scales of biological processes. The question of how biological function emerges from the atomic constituents of matter has intrigued scientists since the early days of quantum mechanics 1, 2 . Encouraged by the success of quantum theory to describe matter, its pioneers rapidly explored extending it to the world of chemistry and biology in the early 20th century. 
It was only in recent times when modern ultrafast spectroscopic tools became available that the search for nontrivial quantum effects in the primary steps of biological processes was made possible, which led to the prospect for the foundation of the field of quantum biology [3][4][5][6] . Recent experimental results obtained for the well-characterized Fenna-Mathews-Olson (FMO) protein complex have been interpreted as evidence of long-lived electronic quantum coherence in the primary steps of the energy transfer [7][8][9][10] . A functional role of long-lived quantum coherence was proposed in that it would speed up the transfer of excitation energy under ambient conditions 11 . These works triggered tremendous interest in different fields, ranging from quantum chemistry to quantum information science. A key parameter is the strength of the coupling of the exciton system to environmental fluctuations, which is related to the reorganization energy. For a conceptual understanding, initial theoretical analysis was built upon the choice of a rather small reorganization energy of 35 cm −1 to fit the reported long lifetime of electronic coherence 11 . Yet, even with this small value, Shi and coworkers found a shorter lifetime for the expected electronic coherence from more advanced calculations of the experimental 2D electronic spectra 12 . The interpretation of the long lifetime of the electronic coherence was further questioned by numerically exact results obtained within the quasi adiabatic propagator path integral method with an experimentally determined spectral density with a considerably larger reorganization energy 13 . Coker et al. and Kleinekathöfer et al. have calculated site-dependent reorganization energies with refined atomic details by advanced molecular dynamics simulations 14,15 . They found significantly larger values of the reorganization energies in the range of 150 to 200 cm −1 . With this disagreement, we have revisited the energy transfer of the FMO complex at room temperature experimentally 16 using 2D electronic spectroscopy to extract the electronic coherence time scales. Instead of a long-lived electronic coherence, the experiment, after having passed a self-consistency verification, yielded a considerably shorter coherence lifetime of 60 fs. This observed timescale for decoherence excludes any functional role for coherent energy transfer in the FMO complex, which occurs on the time scale of several picoseconds at room temperature. Another potential key role for the electronic coherence is played by the pigment-protein host molecular vibrations [17][18][19][20] . In contrast to electronic coherence, the pigment-localized vibrations typically last for picoseconds but are not expected to enhance energy transfer in general. Yet, Plenio and coworkers have suggested the concept of vibrationally enhanced electronic coherence 21 . They reported that in a vibronic model dimer, electronic quantum coherence may be resonantly enhanced by long-lasting vibrational coherence 22 . Instead of an enhancement, Tiwari et al. alternatively suggested that nonadiabatic electronic-vibrational mixing may resonantly enhance the amplitude of particular, delocalized anticorrelated vibrational modes of the electronic ground state 23 . While in principle, this mechanism is also possible in the presence of weak electronic dephasing 24 , realistic values of the strengths of the electronic and vibrational dampings leads to a complete suppression of this mechanism 25 . 
Electronic-vibrational mixing was also examined in a simple dimer model 26 , but the subsequent theoretical calculations show no evidence of an enhancement of the electronic coherence 27 . More recently, the coherent exciton transfer in the FMO complex has been revisited by Zigmantas and coworkers at 77 K 28 . The long-lived oscillations have been carefully assigned to the vibrational coherence of the electronic ground state. Due to strong dissipation, the lifetime of the electronic coherence was too short to be precisely determined even at 77 K. Despite the extensive work on this problem, a complete picture of the electronic coherence and its role in the electronic-vibrational mixing for the energy transfer in the FMO complex is still elusive. Here, we study the energy transfer process in the FMO complex with the explicit aim to ob-serve clear evidence for the onset of electronic coherence effects in energy transport by decreasing temperature. For this, we measure the 2D electronic spectra of the FMO complex in the regime of very low temperature. Specifically, we examine the electronic dephasing by directly measuring the anti-diagonal bandwidth of the main peaks, along with the decays in the cross peaks related to the inter-exciton coupling. It is only at very low temperatures (20 K) that the amplitudes and decays in these electronic coherence signatures become comparable to the energy transfer times. We provide a comprehensive analysis by a global fitting approach, and the subsequent Tukey window Fourier transform allows us to disentangle the electronic coherence from vibrational coherence. By this, we uncover that the longest lived electronic coherence is observable up to 500 fs between the two excitons closest to the reaction centre side. Due to down-hill energy transfer, the electronic coherence of the two higher-energy excitons close to the antenna side shows a much faster decay with a lifetime <60 fs. We furthermore measure the coherent energy transfer over an extensive temperature range. Based on these temperature-dependent measurements, we are able to construct a unifying exciton model, which captures the coherent energy transfer over the entire temperature range studied. Moreover, we investigate the temperature-dependent non-Markovianity of the transfer dynamics to show that the bath fluctuations are uncorrelated even at low temperatures. By this unprecedented combination of experimental and theoretical efforts, we are able to provide a complete picture of quantum coherent effects in the FMO complex over the entire regime from high to low temperatures in one experiment and one theoretical model. Due to the generic structure of the FMO protein, we expect our observations to be extended to other more complicated photosynthetic protein complexes and even photovoltaic devices 29 . Results The solution of the FMO protein complex is prepared in a home-built sample cell and mounted in a cryostat (Oxford Instrument). More details of the sample preparation are given in the Materials and Methods section. Fig. 1(a) shows the structural arrangement of the bacteriochlorophyll a (Bchla) chromophores embedded in the protein matrix (data from 3ENI.pdb). The measured absorption spectrum of the FMO complex at 80 K and the laser spectrum used in this study are shown in the SI. Two-dimensional electronic spectroscopy We measure the 2D electronic spectra of the FMO complex in the range from 20 to 150 K. The details of the 2D spectrometer are given in the Materials and Methods section. 
The real parts of the 2D electronic spectra at 20 K for selected waiting times T are shown in Figs. 1(b-e). The positive and negative amplitudes in the peaks represent the ground-state-bleach (GSB) and excited-state-absorption (ESA) transitions, respectively. In Fig. 1(b), we show the measured 2D electronic spectrum at 20 K for T=30 fs. The exciton states in the FMO complex are located in the frequency range from 12120 to 12700 cm −1 , which is marked by black dashed lines. At T=30 fs, we observe a dramatic stretch of the main peaks along the diagonal, which illustrates the strong inhomogeneous broadening. In addition, one off-diagonal feature corresponding to the ESA is observed at (ω τ , ω t ) = (12300, 12580) cm −1 . At T=50 fs, as shown in Fig. 1(c), the 2D spectrum has not changed significantly, except that the elongation of the main diagonal peaks is slightly reduced. However, the elongation of the main peaks along the diagonal is dramatically reduced by T=510 fs in Fig. 1(d). Moreover, we observe that the main peaks of the higher exciton states are replaced by one peak with ESA features. A new cross peak appears at (ω τ , ω t ) = (12340, 12120) cm −1 , which provides evidence of the down-hill energy transfer from higher exciton states to the lowest ones. Its amplitude is further increased at T=1005 fs, see Fig. 1(e). In addition to the main and cross peaks in the frequency range from 12120 to 12700 cm −1 , we observe more cross peaks appearing at the upper-left side of the 2D electronic spectrum at T=1005 fs. This provides evidence of the vibrational progression in the FMO complex. To examine the lifetime of the electronic dephasing, we analyze the anti-diagonal bandwidth of the lowest exciton peak at (ω τ , ω t ) = (12120, 12120) cm −1 for T=30 fs. To quantify the associated lifetime of the optical coherence, the broadening of the peaks is modeled by Lorentzian lineshapes. More details of the fitting procedure are described in the SI. By this, we are able to capture the electronic dephasing between the electronic ground and excited states. Energy transfer and coherent dynamics To examine the time-dependent coherent dynamics in the 2D spectra, we extract the magnitudes of the cross peaks at different waiting times. In Fig. 2(a), the trace (red line) represents the time evolution of the amplitude of the cross peak at (ω τ , ω t ) = (12340, 12120) cm −1 between excitons 1 and 3 (marked as 'CP13' in Fig. 1(d)). The underlying kinetics (black dashed line) is fitted by an exponential function, and the resulting residual is shown as a black solid line in Fig. 2(b). The raw oscillation data are further purified by a Fourier filter with a Tukey window (<1000 cm −1 ) in Fig. 2(b). With this refined trace, we retrieve the coherent dynamics by performing a wavelet analysis. The details of the Tukey window Fourier transform and the wavelet analysis are given in the SI. The time evolution of the cross-peak coherence is shown in Fig. 2.
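As a concrete, illustrative sketch of this trace processing (not the authors' analysis code), the detrending and Tukey-window Fourier step could look as follows in Python; the file name, time axis and window parameter are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal.windows import tukey

# Assumed inputs: waiting times T_fs (in fs) and the CP13 amplitude trace (arbitrary units).
T_fs = np.arange(0.0, 1500.0, 15.0)            # hypothetical waiting-time axis
trace = np.loadtxt("cp13_trace.dat")            # placeholder file, one amplitude per waiting time

# 1) Remove the slow population kinetics with a single-exponential fit.
def kinetics(t, a, tau, c):
    return a * np.exp(-t / tau) + c

popt, _ = curve_fit(kinetics, T_fs, trace, p0=(trace[0] - trace[-1], 500.0, trace[-1]))
residual = trace - kinetics(T_fs, *popt)

# 2) Apodize the residual with a Tukey window and Fourier transform it.
windowed = residual * tukey(len(residual), alpha=0.5)
dt = T_fs[1] - T_fs[0]
freq = np.fft.rfftfreq(len(windowed), d=dt)     # cycles per fs
wavenumber = freq / 2.99792458e-5               # convert to cm^-1 (c ≈ 3.0e-5 cm/fs)
spectrum = np.abs(np.fft.rfft(windowed))

# 3) Keep only components below 1000 cm^-1, mimicking the low-frequency filter described above.
low = wavenumber < 1000.0
filtered_spectrum = spectrum[low]
```

The subsequent wavelet analysis used in the paper is not reproduced here; the sketch only covers the detrending and windowed Fourier filtering described above.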
Having analyzed the vibrational coherences, we next examine the time scales and the pathways of the energy transfer by the global fitting approach 31 . First, we construct a 3D data set by combining a series of 2D electronic spectra with evolving waiting times. At least two exponential functions have been used to achieve a converged fitting. This yields the time constants of the energy transfer. Electronic quantum coherence To capture the signature of electronic coherence, we address the cross-peak dynamics associated with the two lowest-energy excitons 1 and 2. We take the time-dependent magnitude of the cross peak at (ω τ , ω t ) = (12120, 12270) cm −1 (marked as 'CP21' in Fig. 1(b)) to minimize contributions from the energy transfer dynamics. Again, the residual obtained after removing the kinetics and after polishing by a Tukey window Fourier transform is shown as black circles in Fig. 3(a). Here, the oscillations are induced by the electronic coherence convoluted with vibrational coherences. To disentangle both, the oscillations and decay rates are fitted to exponentially decaying sine functions to extract the oscillation frequencies and the lifetimes of the coherences. We start the fit by assuming the frequencies 68, 150, 180 and 202 cm −1 , which represent the vibrational modes found experimentally as discussed above and which agree with the known modes from the FLN experiment 30 . Moreover, we have obtained the electronic energy gap of 150 cm −1 between excitons 1 and 2 by theoretical calculations (see below), which also agrees with previous results 32 . In addition, to remove the low-frequency oscillations, one additional frequency of 17 cm −1 is included to achieve the best fit with R-square > 0.97; it represents the lowest resolvable frequency associated with our basic time step. All the fitting procedures are performed using the Curve Fitting Toolbox in Matlab 2013(b); the details are given in the SI. We show the high-quality fitting results by the red solid line in Fig. 3(a). The green shadow indicates the 95% confidence bounds of the fit. This effectively allows us to separate the electronic coherence of 150 cm −1 from vibrational coherences. The oscillation related to the electronic coherence is shown in Fig. 3(b) and yields a decay time constant of 105±26 fs. With the frequency of 150 cm −1 , it is clearly observed that the electronic coherence is sustained over only two oscillation periods and disappears completely within 500 fs. More importantly, we observe that the identified oscillations of the electronic coherence are quite significant: they are larger than 5% of the maximum strength of the 2D spectra at 20 K. In addition, the identified vibrational coherences are shown in the SI. Following the same procedures, we analyze the coherent dynamics of the cross peak at (ω τ , ω t ) = (12120, 12270) cm −1 (CP21) at different temperatures (50, 80 and 150 K) and plot the corresponding traces in Fig. 3(c), (e) and (g), respectively. The decay time constants for the different temperatures are shown in Fig. 1(j), marked as "decoherence". The extracted electronic coherences are shown in Fig. 3(d), (f) and (h), respectively. Noticeably, at 50 K, the electronic coherence lasts less than 500 fs, with a decay time constant of 96 ± 40 fs. The lifetime of the electronic coherence is significantly reduced at 80 K, where we measure a decay time constant of 81 ± 26 fs. At 150 K, the red solid line in Fig. 3(h) clearly shows that the electronic coherence is strongly damped and does not survive even a single oscillation period. Again, the associated vibrational coherences retrieved during the fitting procedures are shown in the SI. To study the coherence of excitons in states of higher energy, we monitor the dynamics of ESA peaks at ω t = 12580 cm −1 .
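The same exponentially decaying sinusoid fit is reused for these higher-lying ESA peaks. Purely as an illustration (the paper's fits were done with the Matlab Curve Fitting Toolbox), such a fit could be sketched in Python as follows, with the quoted frequencies as starting guesses and the arrays T_fs and residual assumed to come from a detrending step like the one sketched earlier.

```python
import numpy as np
from scipy.optimize import curve_fit

C_CM_PER_FS = 2.99792458e-5                         # speed of light in cm/fs
START_FREQS_CM = [17.0, 68.0, 150.0, 180.0, 202.0]  # starting guesses quoted in the text

def damped_sines(t, *params):
    """Sum of exponentially decaying sinusoids; params groups (amplitude, freq_cm, tau_fs, phase)."""
    total = np.zeros_like(t)
    for amp, nu, tau, phi in np.reshape(params, (-1, 4)):
        omega = 2.0 * np.pi * nu * C_CM_PER_FS      # angular frequency in rad/fs
        total = total + amp * np.exp(-t / tau) * np.sin(omega * t + phi)
    return total

p0 = []
for nu in START_FREQS_CM:
    p0 += [1.0, nu, 300.0, 0.0]                     # amplitude, frequency (cm^-1), lifetime (fs), phase

# T_fs and residual are assumed to be available from the previous sketch.
popt, pcov = curve_fit(damped_sines, T_fs, residual, p0=p0, maxfev=20000)
components = np.reshape(popt, (-1, 4))
electronic = min(components, key=lambda c: abs(c[1] - 150.0))
print(f"~150 cm^-1 component: lifetime {electronic[2]:.0f} fs")
```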
Our theoretical calculations (see details in the next section) retrieved energy levels of the excitons 2, 3, 5 and 7 highlighted by the corresponding ω τ -lines in Fig. 4(a). The intersection points of two marker lines are denoted as A, B, C and D, respectively, to which the excited state dynamics of the excitons 2, 3, 5 and 7 corresponds. Following the same procedure as above, we extract the time evolution of the amplitude of the cross peaks at A, B, C and D and remove the underlying kinetics by the global fitting approach. The obtained residuals of these peaks after the Fourier treatment are shown in Fig. 4(b), (c), (e) and (f), respectively, for increasing waiting times. More details of the fitting procedure are presented in the SI. By this, a high-quality fit is obtained, which is shown as the red solid line in Fig. 4(b). The extracted electronic coherence is presented as red solid line in Fig. 4(d). We find the frequency of 210 cm −1 of this coherent oscillation, which is in excellent agreement with the energy gap between exciton 2 and 5 from our theoretical calculations. Next, we examine the subsequent coherent dynamics of the ESA peak B. From our theoretical analysis, we expect this to be a peak that originates from the ESA of exciton 5. After repeating the fitting procedure described above, we obtain the residual shown in Fig. 4(c). The measured residuals and the results of the fits are shown as black dots and blue solid line, respectively. Furthermore, we show the separate electronic coherence as blue solid line in Fig. 4(d). Again, the obtained frequency of 206 cm −1 matches the energy gap between exciton 5 and 2 exactly. Interestingly, these well-resolved electronic coherences in Fig. 4(d) show evidence of anti-correlated oscillations with a slight phase offset. From our theoretical analysis, we uncover the coherence between exciton 2 and 5 is dominated by the strong electronic couplings of pigment 4 and (5, 6). Details of the transformation from the site to the exciton basis are given in the SI. The residuals of the ESA peaks C and D are shown as black dots in Fig. 4(e) and (f), respectively. The fits to the oscillations are shown as red and blue solid lines in Fig. 4(e) and (f) and the extracted electronic coherences are shown as red and blue solid lines in Fig. 4(g). We observe two electronic coherent oscillations both with the frequency of ∼310 cm −1 , which agrees perfectly with the energy gap between exciton 3 and 7. As determined by the basis transformation, this coherence is dominated by the electronic coupling between pigment 1 and 2. Moreover, compared to the lifetime of the coherence between exciton 2 and 5, these coherences show smaller time constants of 34 and 59 fs. Hence, the electronic coherence lasts shorter for higher exciton states, which is due to the down-hill energy transfer. Moreover, the larger energy gap between exciton 3 and 7 produces a shorter lifetime of the electronic coherence due to the faster energy transfer. Theoretical calculations We construct a Frenkel-exciton model to study the coherent FMO dynamics. The electronic transitions in the pigments are approximated by optical transitions between two energy eigenstates and the electronic couplings between pigments are calculated within the dipole approximation. Moreover, to include fluctuations of the electrostatic interactions, the system is linearly coupled to a harmonic reservoir. For simplicity, we consider a standard Drude bath spectral density. 
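For orientation, a minimal sketch of the generic Frenkel-exciton model with linear bath coupling and a Drude spectral density is given below; this is the standard textbook form, using the site energies ε_m, inter-pigment couplings J_mn and bath coupling constants c_mξ that appear in the Materials and Methods, and it should not be read as a verbatim reproduction of the authors' equations.

```latex
H = \sum_{m} \varepsilon_m \lvert m\rangle\langle m\rvert
  + \sum_{m \neq n} J_{mn} \lvert m\rangle\langle n\rvert
  + \sum_{m,\xi} \hbar c_{m\xi}\, x_{\xi}\, \lvert m\rangle\langle m\rvert
  + H_{\mathrm{bath}},
\qquad
J_{\mathrm{Drude}}(\omega) = \frac{2\Lambda\Gamma\omega}{\omega^{2} + \Gamma^{2}}
```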
Moreover, a particular localized vibrational mode of 180 cm −1 is coupled to the electronic system to investigate the role of vibrational/vibronic coherence. This vibrational mode is known to be the most relevant out of a group of 44 vibrational modes. The 2D electronic spectra are calculated by a time non-local quantum master equation 33,34 and the time evolution of the peaks is obtained by the equation-of-motion phase-matching approach 35 . More details are given in the Materials and Methods section and the SI. We initially choose the site energies from previous works 32 and then optimize them by simultaneously fitting to the experimental absorption spectra measured at different temperatures. After that, we calculate the 2D electronic spectra and then refine the system-bath interaction strength by comparing the calculated electronic dephasing lifetimes to the experimental ones at different temperatures. By this, we are able to develop a unique system-bath model with a single set of parameters, with which we then calculate the 2D spectra. We show the calculated waiting-time traces of the cross peak CP21 at (ω τ , ω t ) = (12120, 12270) cm −1 in Fig. 5. We observe that the electronic coherence lasts for two oscillation periods and has disappeared at 500 fs. More importantly, based on the wavelet analysis, the oscillation phase retrieved in Fig. 5(b) perfectly matches that revealed by the experimental data shown in Fig. 3(d). More details of the comparison are shown in the SI. Moreover, the electronic (black dashed line) and vibrational (black dash-dot line) coherences at 80 and 150 K are shown in Fig. 5(d) and (f). On the basis of the results at different temperatures, we can safely conclude that the lifetime of the vibrational coherence at 180 cm −1 is not significantly shortened with increasing temperature. However, the lifetime of the electronic coherence is dramatically reduced when the temperature is increased. The same conclusion can be drawn for the higher-energy excitons at different temperatures; the details of the coherence lifetimes of the higher excitons are shown in the SI. Discussion An important question related to the exciton transfer dynamics concerns the nature of the bath-induced fluctuations. It may be characterized by the measure of non-Markovianity, which quantifies how strongly the dephasing and relaxation dynamics departs from ordinary Markovian, i.e., memory-less, behaviour. In principle, a highly structured environment consisting of several localized vibrational modes of the FMO protein may give rise to significant non-Markovian dynamics. Early numerically exact path-integral calculations 36 on the basis of an experimentally determined spectral density have shown that the exciton dynamics is purely Markovian at ambient conditions. This expectation has been confirmed recently experimentally by comparing the decay time of optical dephasing and electronic quantum coherence in the FMO complex 16 . The equivalence of the time scales of the optical dephasing and the electronic decoherence reveals that the dynamics of energy transfer in the FMO complex at room temperature is fully Markovian. Here, we provide a complete picture of the role of non-Markovianity in the FMO complex at different temperatures. As discussed above, we have measured, at 20 K, the longest lifetime of the electronic quantum coherence for the dephasing of the two lowest-energy excitons, with a decay time of 105 fs. 
On the other hand, the analysis of the anti-diagonal band width yielded a decay time of 197 fs for the optical dephasing of excitons 1. The difference of almost a factor of 2 is due to the low temperature, but is still covered by a fully Markovian description of the transfer dynamics 38 . This finding is also in agreement with low-temperature calculations 36 . On the basis of these studies, we conclude that non-Markovian energy fluctuations of the pigments induced by pigment-hosted molecular vibrations do not play a role in the energy transfer of the FMO complex. Even, due to its simple structure, we believe that this conclusion can be extended to more complicated photosynthetic protein complexes. Instead of long-lived electronic coherence, our study uncovers a long lasting beating dynamics composed of vibrational coherences in the range of 180 cm −1 in the electronic ground state. As summarized in the Introduction, several studies have suggested a resonant enhancement of the short-lived electronic coherence by the long-lived vibrational coherence in the FMO complex. However, our work clearly shows no such a resonant enhancement of the electronic coherence during the population transfer. In contrast, from the measured and calculated 2D electronic spectra, we have retrieved the scale of the reorganization energy of 120 cm −1 , which manifests a quite strong system-bath interaction that rapidly destroys the phase of the electronic quantum coherences between pigments in the FMO complex. Hence, instead of the energy transfer being enhanced by strongly delocalized exciton wave functions, the rather large reorganization energy sharpens the limits for delocalization and the energy pathways between pigments. Then, the efficient down-hill transfers of not too strongly delocalized excitons are determined by a simple thermal distribution of excitons which rapidly arises after an initial ultrafast non-equilibrium dynamics triggered by the photo-excitation. Conclusions In this paper, we provide a complete picture of the coherent contribution to energy transfer in the FMO complex by 2D electronic spectroscopy in the entire regime from low to high temperatures. In particular, the spectroscopic measurements at low temperature of 20 K allows us to provide unambiguous evidence of the lifetime of the electronic quantum coherence and to disentangle the electronic coherence from long-lived vibrational coherence. Interestingly, due to the down-hill energy transfer, the electronic coherence between the two lowest excitons is marginally observable out to 500 fs at 20 K. However, the coherence lifetime of higher excitons is dramatically reduced by the population transfers. This analysis allows us to disentangle the previously reported long-lived beating of cross-peak signals and to show that they are composed of mixed ground state molecular Raman modes. Moreover, we uncover that the lifetime of electronic coherence is significantly modulated by temperature, while, in contrast, the resonant beatings of vibrational coherences last for picoseconds even at 150 K. A thorough analysis on the basis of a unique combination of experimental data and theoretical modelling enables us to provide a reliable estimate for the decisive parameter of the reorganization energy of the FMO complex. We find a reorganization energy of 120 cm −1 , which represents a strong system-bath interaction of the pigments with their protein environment. 
This coupling is sufficient to significantly reduce the lifetime of electronic coherences and leads to a rapid intermittent localization of the electronic wavefunction on a few molecular sites. Instead of a long-lived quantum coherent energy transfer, we provide a different picture of a down-hill energy transfer, in which the pathways of population transfers are dynamically constructed simply by following lower site energies of the pigments involved and by rather fast electronic damping due to significant nuclear reorganization in the excited state. The latter sharpens the funnels of the energy flow in the FMO complex. In general, we conclude that the energy transfer in the FMO complex is dominated by the thermal dynamics of weakly delocalized excitons after an initial ultrafast non-equilibrium photo-excitation. Due to common features in the bath for all light harvesting systems, despite the relative simplicity of FMO, we believe that this conclusion can be further extended to other, more complicated photosynthetic protein complexes. Materials and Methods Sample preparation. The FMO protein was isolated from the green sulfur bacterium C. tepidum (see the SI for more details). The sample was dissolved in a Tris buffer at pH 8.0. It was filtered with a 0.2 µm filter to reduce light scattering. The sample was then mixed 70:30 v/v in glycerol and kept in a home-built cell with an optical path length of 500 µm. The cell was mounted in the cryostat (MicrostatHe-R) for the low-temperature measurements. 2D Electronic measurements with experimental conditions. Details of the experimental setup have already been described in earlier reports from our group 16 . Phasing of the obtained 2D spectra was performed using an "invariance theorem" 37 . In the system-bath Hamiltonian, c mξ is the coupling constant between the mth pigment and the ξth fluctuation mode. The bath is specified by the spectral density J m (ω) = π Σ ξ ħ² c mξ ² δ(ω − ω mξ ). We include one overdamped mode and one underdamped mode to study the impact of vibrational coherence. The corresponding spectral density can be expressed as J(ω) = 2ΛΓω/(ω² + Γ²) + 2S ω vib ³ γ vib ω/[(ω² − ω vib ²)² + γ vib ² ω²]. Here, Λ and Γ −1 are the damping strength and the bath relaxation time of the overdamped mode, respectively. S, ω vib and γ −1 vib are the Huang-Rhys factor, the vibrational frequency and the vibrational relaxation time of the underdamped mode, respectively. This form has been shown to describe the experimental data 16 correctly. The non-equilibrium dynamics of the system-bath model is calculated by a time-nonlocal quantum master equation. The details of this method are described in the SI. The absorption spectrum of the FMO complex is calculated from the dipole-dipole correlation function, where ρ g = |g⟩⟨g| denotes the electronic ground-state density matrix and a δ-shaped laser pulse is assumed; ⟨·⟩ rot denotes the rotational average of the molecules with respect to the laser direction. Moreover, the 2D electronic spectra are obtained by calculating the third-order response function S (3) (t, T, τ ). Here, τ is the delay time between the second and the first pulse, T (the so-called waiting time) is the delay time between the third and the second pulse, and t is the detection time. To evaluate the 2D electronic spectra, we need the rephasing (RP) and non-rephasing (NR) contributions of the third-order response function, i.e., S (3) (t, T, τ ) = S RP (t, T, τ ) + S NR (t, T, τ ). Assuming the impulsive limit (the δ-shaped laser pulse), one obtains the 2D spectra I RP (ω t , T, ω τ ) and I NR (ω t , T, ω τ ) by Fourier transforming S RP and S NR with respect to τ and t. The total 2D signal is the sum of the two, i.e., I(ω t , T, ω τ ) = I RP (ω t , T, ω τ ) + I NR (ω t , T, ω τ ). 
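As a numerical illustration of this spectral density (not code from the paper), the sketch below evaluates the Drude-plus-underdamped form given above. Only the ~120 cm −1 reorganization scale and the 180 cm −1 mode are taken from the text; the remaining parameter values, and the identification of Λ with the quoted reorganization energy, are placeholder assumptions.

```python
import numpy as np

def spectral_density(omega, lam=120.0, gamma=50.0, huang_rhys=0.02,
                     omega_vib=180.0, gamma_vib=5.0):
    """Drude (overdamped) plus one underdamped Brownian-oscillator mode, all in cm^-1.

    lam        : damping strength of the overdamped mode (~120 cm^-1 assumed here)
    gamma      : inverse bath relaxation time of the overdamped mode (placeholder)
    huang_rhys : Huang-Rhys factor S of the underdamped mode (placeholder)
    omega_vib  : vibrational frequency (180 cm^-1, as quoted in the text)
    gamma_vib  : inverse vibrational relaxation time (placeholder)
    """
    drude = 2.0 * lam * gamma * omega / (omega**2 + gamma**2)
    brownian = (2.0 * huang_rhys * omega_vib**3 * gamma_vib * omega
                / ((omega**2 - omega_vib**2)**2 + gamma_vib**2 * omega**2))
    return drude + brownian

omega = np.linspace(1.0, 600.0, 600)   # cm^-1 grid
J = spectral_density(omega)            # can be plotted or used to build bath correlation functions
```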
The model parameters of the site energies and electronic couplings are initially taken from Ref. 32 and further refined during a simultaneous fit to the absorption spectra of the FMO complex at the considered temperature. To precisely determine the reorganization energy, the parameters are further refined by fitting of the theoretical results to the experimental anti-diagonal bandwidth of the main peak of exciton 1.
7,030.8
2021-04-03T00:00:00.000
[ "Chemistry", "Physics" ]
Functional Verification of the Four Splice Variants from Ajania purpurea NST1 in Transgenic Tobacco : Ajania purpurea is a small semi-shrub in the Asteraceae family. Its corolla is purplish red from the middle to the top, and its leaves and flowers are all fragrant. It can be introduced and cultivated as an ornamental plant. In order to survive adversity, plants actively regulate the expression of stress response genes and transcripts. Alternative splicing is a common phenomenon and an important regulation mode of eukaryotic gene transcription, which plays an important role in various biological processes. In this study, four splice variants of the NST1 gene were identified from A. purpurea, and the molecular mechanism of NST1 alternative splice variants involved in abiotic stress was explored through bioinformatics, transgenics and paraffin sectioning. The analysis of amino acid sequences showed that ApNST1.1 had alternative 5′ splicing, ApNST1.2 had alternative 3′ splicing and ApNST1 had both splicing types. The main conclusions from studying transgenic tobacco seedlings and adult seedlings under abiotic stress were as follows: ApNST1, ApNST1.1 and ApNST1.3 showed salt tolerance at the seedling stage, especially ApNST1.3. At the mature seedling stage, the stem height of ApNST1.1 increased significantly, and ApNST1.1 showed obvious salt tolerance, while ApNST1.2 showed obvious cold resistance. Compared to Super35S::GFP, the xylem of ApNST1 thickened by 94 µm, and the cell wall thickened by 0.215 µm. These results are of great significance for the breeding and application of ApNST1 to select splice variants with more resistance to abiotic stress, and for future study in this area. At the same time, they provide a new direction for A. purpurea breeding, and increase the possibility of garden applications. Introduction Ajania purpurea C. Shih, a small semi-shrub of the genus Ajania (Asteraceae), grows in alpine gravel piles, alpine meadows and shrublands at an altitude of 4800-5300 m, and is endemic to the Gundes Mountains of Tibet. Its corolla is purple-red from the middle upward; the leaves and flowers are rich in fragrance. It displays cold and drought tolerance, and it can be introduced and cultivated as a good ornamental garden plant [1]. NAC SECONDARY WALL THICKENING PROMOTING FACTOR 1 (NST1) is an important NAC transcription factor for secondary wall biosynthesis, and NST1 regulates the thickening of the secondary wall of anther endothecium cells necessary for anther dehiscence [2]. In addition, NAC transcription factors can affect the formation of the secondary wall through the desiccation pathway. Liu et al. found that abscisic acid regulates the formation of secondary cell walls and lignin deposition of Arabidopsis thaliana (L.) Heynh by phosphorylating NST1 [3]. Adverse conditions such as cold, high temperature, salinity and drought are great threats to crop production at present, seriously affecting the growth and development, yield and quality of plants. Plants have evolved sensitive systems to cope with various abiotic stresses [4,5]. NAC (NAM, ATAF and CUC) family transcription factors are a large class of plant-specific transcription factors. The N-terminus is highly conserved and is a DNA-binding region, while the C-terminus is a transcriptional activation region with great structural variability [6,7]. 
NAC family transcription factors are involved in the regulation of many biological processes in plants, including growth and development, organ formation and pathogen defense [8]. In addition, increasing numbers of studies have shown that NAC family transcription factors play an important role in abiotic stress responses such as drought [9], salt [10] and low temperature [11][12][13][14]. Most studies on the NAC family have focused on the resistance of crops, such as wheat, rice [8,[15][16][17], and tomato [18,19], but the mechanism of the NAC family in A. purpurea remains unclear. This study will provide a preliminary analysis of NST1. Alternative splicing (AS) is a common phenomenon and an important regulatory mode of eukaryotic gene transcription, which plays an important role in various physiological processes of organisms [20]. In nature, most precursor mRNAs (pre-mRNAs) transcribed from eukaryotic DNA contain exons and introns distributed in the noncoding region between exons [21,22]. Proper editing of precursor mRNAs is essential for eukaryotes. However, for many genes, the splicing of precursor mRNA is not unique. The same precursor mRNA may form multiple mature transcripts through different splicing methods, which in turn encode proteins with different functions, a phenomenon called alternative splicing. Variable splicing is a common phenomenon for most eukaryotes. Variable splicing has been reported in 61% of Arabidopsis genes, while AS events occurred more frequently under an abiotic stress [23]. There are many types of alternative splicing; the most common are the following 5 types: retained intron (RI), skipped exon (SE), alternative 5 splice site (A5SS), alternative 3 splice site (A3SS), and mutually exclusive exons (MXE). There are several other types that are less common, such as alternative promoters and alternative poly(A) [24]. Variable splicing has important biological functions. Overall, it increases the diversity of proteins encoded in organisms and regulates gene expression at the posttranscriptional level. In plants, alternative splicing has been found to play a role in many important physiological processes, including photosynthesis [25], stress response [26], flowering [27,28], and photoperiod regulation [29]. Confirming the importance of splicing in plant stress adaptation, key players of stress signaling have been shown to encode alternative transcripts, whereas mutants lacking splicing factors or associated components show a modified sensitivity and defective responses to abiotic stress. When plants in natural conditions face harsh changes in the environment and climate, they must withstand abiotic stresses such as drought, salinity, and extreme temperatures. In order to survive adversity, plants actively regulate the expression of stress response genes and transcripts. Alternative splicing is a regulatory process by which different isomers are produced, enhancing the diversity of plant proteomes. Several studies have confirmed that alternative splicing plays an important role in plant performance, adaptability and survival. We have found the performance of the four splice variants in the purple flower under different stresses is more evidence of this view. In our previous work, we cloned NST1, SND1 and other NAC family genes, and carried out phylogenetic analysis, focusing on the evolutionary history of this wild chrysanthemum to fully understand its differentiation time. 
The purpose of this study was to further investigate the functional roles of the four splice variants identified during the cloning of the NST1 gene, to aid in the development of resistant plants. Specifically, we studied the alternative-splicing types of the four splice variants, ApNST1, ApNST1.1, ApNST1.2 and ApNST1.3, and observed their growth phenotypes in natural and sterile environments, including the xylem thickness in stem cross sections of the four splice variants in tobacco (Nicotiana tabacum cv. Nc89). Our ultimate aim is to provide a variety of possibilities for the selection and cultivation of A. purpurea, which is of great significance for the multi-faceted applications of A. purpurea. Furthermore, enriching the resistance breeding of A. purpurea is of great significance for chrysanthemum breeding. Cynara scolymus L. is a plant in the Compositae family and is closely related to A. purpurea. Based on the CsNST1 (accession number LEKV01004794, NCBI) CDS sequence downloaded from NCBI, the primers NST1-F (5′-ATGCTGCCCTCTCCTTTGAAT-3′) and NST1-R (5′-GCGAATTTGACCGGATTGG-3′) were designed to amplify ApNST1, using cDNA from A. purpurea as a template. The PCR fragments were obtained and cloned into the plant expression vector Super35S::GFP by a double digestion technique. Tobacco Transformation Plant expression plasmids were transferred into competent cells of the Agrobacterium tumefaciens strain GV3101 through freeze-thaw treatment. The transformed A. tumefaciens colonies were selected on LB-agar plates containing 50 mg·L −1 of kanamycin, 50 mg·L −1 of rifampicin and 50 mg·L −1 of gentamicin. The positive colonies were identified using PCR amplification of the inserted genes and were used for the tobacco transformation as previously described [30]. The transgenic plants were confirmed by qRT-PCR (Figure S1). In the follow-up experiment, Super35S::GFP was used as the control group. Quantitative Real-Time PCR Assay Quantitative real-time PCR (qRT-PCR) was performed using ChamQ Universal SYBR qPCR Master Mix (Vazyme, Nanjing, China) according to the manufacturer's instructions under the following conditions: initial denaturation at 95 °C for 30 s, followed by 40 cycles at 95 °C for 10 s and 60 °C for 30 s. At the end of the qRT-PCR cycles, the products were subjected to melt curve analysis to verify the specificity of PCR amplification. Three independent experiments were performed. Relative expression levels were calculated using the 2 −∆∆Ct formula with Actin 7 as a housekeeping gene [31]. The primers used are shown in Supplemental Table S1. The expression in transgenic tobacco was significantly higher than that in WT, which demonstrated the successful transfer and expression of the four different NST1 genes in tobacco.
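As a minimal illustration of the 2 −∆∆Ct calculation referenced above (with Actin 7 as the reference gene), a sketch is given below; the function name and the Ct values are hypothetical and not taken from the study.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^(-ΔΔCt) relative expression of a target gene.

    ct_target, ct_ref         : mean Ct of the target gene and of Actin 7 in the sample
    ct_target_cal, ct_ref_cal : the same Ct values in the calibrator (e.g. wild type)
    """
    delta_ct_sample = ct_target - ct_ref
    delta_ct_calibrator = ct_target_cal - ct_ref_cal
    return 2.0 ** (-(delta_ct_sample - delta_ct_calibrator))

# Hypothetical Ct values, purely for illustration:
fold_change = relative_expression(22.1, 18.0, 27.5, 18.2)
print(f"Relative expression (fold change): {fold_change:.1f}")
```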
Transgenic Tobacco Growing Conditions In this study, transgenic tobacco was investigated in a sterile environment and in an artificial climate chamber. First, the seeds were soaked in sterile water, and then the surface of the seeds was sterilized with 75% alcohol for 10 s, washed with sterile water 3 times, sterilized with 2% sodium hypochlorite for 10 min and washed with sterile water 3 times. Then, they were sown onto resistance screening medium with the resuspension, and placed in an incubator under a light intensity of 35 µmol·m −2 ·s −1 , with 16 h of light at 28 °C and 8 h of darkness at 25 °C, to obtain aseptic seedlings. In this study, Murashige and Skoog medium (MS) containing 50 mg/L hygromycin was used as the resistance screening medium, and the MS resuspension contained 0.15% agar [31]. MS + 30 g·L −1 sucrose + 6 g·L −1 agar was used as the basic medium. The healthy seedlings were transferred to the corresponding medium for treatment. MS medium containing 200 mmol·L −1 NaCl (a moderate stress) or 200 µmol·L −1 abscisic acid (ABA) was placed in the light incubator at a light intensity of 35 µmol·m −2 ·s −1 , with 16 h at 28 °C and 8 h at 25 °C, for the salt treatment and the abscisic acid treatment, respectively. At the same time, MS medium was placed in a 4 °C incubator for the low-temperature treatment. Salt, Drought and Low-Temperature Treatment under Natural Conditions After 15 days of seeding and robust growth, the sterile seedlings were planted into turf soil and treated in an artificial climate chamber. The soil was watered slightly to moisten it, and the plants were then shaded for two days to retain water and take root. When the seedlings grew steadily and robustly, they were treated with salt, drought and low temperature; the salt concentration was 200 mmol/L, and the low-temperature treatment was maintained using a 4 °C incubator. Measurement of Fresh Weight, Root Length and Stem Height The treated tobacco was weighed to determine the fresh weight of the plant. The root length of tobacco before and after treatment was measured using ImageJ 1.4.3.67. The same software was used to measure the stem height of adult plants before and after treatment and after rehydration. Paraffin Section The treated transgenic tobacco was taken as the material, and stalk segments 1 cm long, taken at one third of the plant height above the soil surface, were used as samples. The stalks were placed in FAA (70% alcohol: formalin: acetic acid 18:1:1), fixed for 24 h and then soaked in hydrogen peroxide and glacial acetic acid (1:1). The material was softened in the mixed solution for 48 h, after which the sample was dehydrated with ethanol and embedded in paraffin. The sample was cut into 10 µm sections using a microtome (Leica RM-2145). Finally, the sections were stained with safranin-fast green, examined using a microscope and analyzed via image collection. ImageJ 1.4.3.67 was used to measure xylem thickness and cell wall thickness for statistical analysis. Analysis of Data All treatments mentioned in this study involved at least three independent biological and technical replicates. Microsoft Excel 2016, GraphPad Prism 8.0.2 and IBM SPSS 26 were used for statistical testing and analysis of the data. Bioinformatics Analysis of the Four Splice Variants from A. purpurea NST1 We obtained sequence amplification primers from CsNST1 to clone the ApNST1 gene. Then, our research group identified four splice variants, which were named ApNST1, ApNST1.1, ApNST1.2 and ApNST1.3. We compared the amino acid sequences resulting from these four splice variants, and the results are shown in Figure 1a. It was observed that the ApNST1.1 transcript had an alternative splice site at the 5′ end of one exon, and the ApNST1.2 transcript had an alternative splice site at the 3′ end of another exon. ApNST1 has both of these splice sites; i.e., ApNST1 has two splicing types, alternative 3′ splicing and alternative 5′ splicing, ApNST1.1 has alternative 5′ splicing, and ApNST1.2 underwent alternative 3′ splicing, as shown in Figure 1b. ApNST1, ApNST1.1, ApNST1.2 and ApNST1.3 were compared with the homologous genes in the NCBI database, and it was found that their nucleotide sequences had high homology with CsNST1, with similarities of 58.33-79.21%. 
Although they did not contain the NST1 domain, they maintained homology with NST1, so the naming method was still adopted in this study. It is worth noting that splice variants ApNST1 and ApNST1.2 have premature termination codons (PTCs), and this results in the loss of protein expression (Figure 1a). Phenotypes of Transgenic Tobacco Seedlings under Abiotic Stress The transgenic tobacco seeds obtained by our research group were sown, and tobacco seedlings with two true leaves and similar growth potential were subjected to abiotic stress, which included treatment with salt, low temperature (4 °C) and abscisic acid (ABA). According to the analysis of phenotype, including the data on weight and root length (Figure 2), under salt stress, ApNST1, ApNST1.1 and ApNST1.3 all showed a certain salt tolerance. Among these, ApNST1.3 showed a significant change in fresh weight, compared both with its own control and with Super35S::GFP. However, ApNST1.2 grew slowly; its fresh weight did not change. ABA stress inhibited the growth of the plants; the seedlings did not change and the leaves showed yellowing. Overall, ABA also inhibited root elongation, but the transgenic plants significantly moderated this effect. We know that ABA is a strong growth inhibitor, as it inhibits cell division and elongation, and can inhibit the growth of whole plants or isolated organs. Under low-temperature stress, the fresh weight of seedlings changed little and the lines showed no cold resistance. Under normal growth conditions, ApNST1.1 plants grew vigorously, grew quickly and had large leaves. 
Phenotypes of Transgenic Tobacco Mature Seedlings under Abiotic Stress The transgenic tobacco seedlings were planted in the artificial climate chamber. When their growth was relatively robust, plants with similar growth were selected to be treated with salt, low temperature and drought for 15 days, and rewatered for 7 days after treatment. The growth conditions of the plants in the two stages were observed (Figure 3a-d). The average elongation of ApNST1.1 before and after treatment was 10 cm, which was 1.3 cm higher than that of Super35S::GFP (Figure 3e), and the change in stalk height increased significantly. However, compared to the control group, the elongation of the other three splice variants was slower and the final elongation was lower, indicating that the salt-alkali tolerance of ApNST1, ApNST1.2 and ApNST1.3 was not obvious. Compared to Super35S::GFP, the stem of ApNST1.2 showed no obvious elongation and showed significant salt tolerance, which was consistent with the observations of the seedling phenotype. However, the transgenic tobacco plants began to grow rapidly after rewatering, indicating that ApNST1.2 resumed growth once given appropriate conditions after being removed from the high-salt environment. ApNST1.1 could still recover normal growth, but there was no obvious growth trend after rehydration, indicating that high salt conditions did not affect its growth, and it reached the flowering state before rehydration; i.e., vegetative growth was transformed into reproductive growth, and the stem no longer extended significantly. In the process of low-temperature treatment, the stem height of ApNST1 and Super35S::GFP changed by approx. 2 cm, similar to the change in stem height of ApNST1 under normal growth conditions. Moreover, the stem elongation rate remained unchanged, indicating that the growth of ApNST1 was not affected by the low-temperature environment and that it had a certain cold resistance. The average elongation of ApNST1.2 before and after treatment was 6.1 cm, 4.3 cm higher than that of Super35S::GFP, and its cold resistance was obvious. Low temperature inhibited the growth of ApNST1.1 and ApNST1.3. Under low-temperature conditions, the transgenic tobacco plants did not grow significantly after rehydration. Under drought stress, the rate of stem elongation of ApNST1, ApNST1.1, ApNST1.2 and ApNST1.3 slowed, and all of them recovered after rewatering, indicating that they had no drought resistance. After the treatment, the plants were rehydrated for 7 days, and it was found that after the water supply was restored, the plants could grow normally and did not die during the treatment (Figure 3f). 
Analysis of Cross-Cut Structure of Transgenic Tobacco Stem Transection of 12-week-old transgenic tobacco stems showed the epidermis, cortex, phloem, xylem and pith. Among them, the parts stained red by the safranin-fast green stain were lignified cell walls and vessels, while the parts stained green were cellulosic cell walls and sieve tubes (Figure 4a-e). The xylem and cell wall thicknesses of Super35S::GFP, ApNST1, ApNST1.1, ApNST1.2 and ApNST1.3 transgenic tobacco were measured and compared, and the statistical analysis is shown in Figure 4f,g. The results show that there were significant differences in xylem and cell wall thickness among the five transgenic tobacco lines. The average xylem thicknesses of Super35S::GFP and ApNST1 were 126 µm and 220 µm, respectively, and the average xylem thickness of ApNST1.3 was about 30 µm greater than that of Super35S::GFP. Compared to Super35S::GFP, the average xylem thickness of ApNST1.2 was approximately 50 µm smaller, which was consistent with the salt tolerance of seedlings and adult seedlings. Cell wall thickness changed little, but all lines showed some thickening, and ApNST1 was the most obvious in this regard. The thickening of the single cell wall was about 0.215 µm, indicating that NST1 had played a role. Indeed, it was also evident that the multiple types of alternative splicing found among the splice variants increased the impact of the NST1 transcription factor [32]. The difference in cell wall thickness was very small, which was mainly due to the different numbers of xylem cells. 
ApNST1 had an average of 17 cells in the xylem, while ApNST1.2 had an average of 5 cells; the number of xylem cells of the other splice variants remained at approximately 10 (Figure 4a-e). The xylem thickness of ApNST1 and ApNST1.2 exhibits obvious differences that may be related to the fact that they carry PTCs. Future studies could investigate the relationship between these factors. Discussion AS is an important way to generate notable regulatory and proteomic complexity in eukaryotes. ES and IR are the most prevalent forms of AS in eukaryotic groups, including plants [33]. The alternative splicing of 5′ and 3′ splice sites preserves or removes all or part of an exon sequence through the selection of the 5′ splice donor or the 3′ splice acceptor [34]. Alternative 5′ and 3′ splice sites are also very common among alternative splicing types [35], but there has been little progress in the research on these two splicing modes in plants; thus, further attention and research are needed in the future. The splice variants that we obtained were of the A5SS and A3SS types; this research has enriched the known chrysanthemum splice-variant types. Alternative splicing can increase the variability and complexity of the transcriptome, and AS has two main results: proteome diversification and regulation of gene expression [36]. 
AS has been suggested as one of the possible origins of the large phenotypic differences between species, which may be similar to the protein-coding gene pool shared by vertebrates [37]. Thus, alternative splicing is considered a "key step between transcription and translation". The AS process has been identified in many plants; it is a very common and important regulatory component in plant growth and development. We believe that with the continuous improvement of sequencing technology and analytical methods, we will have a deeper understanding of this refined post-transcriptional regulation. With the development of whole genome sequencing and study on the function of the NAC transcription factor, it has been found that NAC transcription factor is expressed in different developmental stages and tissues of plants, and is closely related to lignin synthesis, growth and development of plants, and the regulatory function of adapting to abiotic and biological stresses [38]. The role of NAC transcription factors in growth and development and stress response has been confirmed in many studies [11,12]. Meanwhile, new NAC family genes and functions continue to be discovered. In 2018, Shandong Agricultural University overexpressed the tomato transcription factor SlNAC35 gene in tomato, enhancing the cold resistance of a transgenic tomato [39]. Various studies have shown that co-transcription or post-transcription mechanisms are highly induced by abiotic stress and involve a large number of stress-related genes, confirming the importance of alternative splicing in plant performance, adaptability and stress resistance. In the cloning of the NST1 gene, our research group found four kinds of splice variants, which were found to be homologous to NST1 of Thistle of Asteraceae by comparison. They were thusly named ApNST1, ApNST1.1, ApNST1.2 and ApNST1.3 and the function of variable splicing of the NST1 gene of A. purpurea was explored under abiotic stress. Transgenic tobacco seedlings and seedlings were treated to observe their phenotypes through analyses of seedling performance of ApNST1, ApNST1.1, ApNST1.2 and ApNST1.3 under stresses due to low temperature, drought, ABA and salt. We draw the following conclusions: ApNST1 seedlings had a certain degree of salt resistance, and the adult seedlings had a certain degree of low-temperature resistance. The adult seedlings and seedlings of ApNST1.1 showed obvious salt resistance. The adult seedlings of ApNST1.2 showed obvious low-temperature resistance, and both the seedling and the adult seedlings showed significant salt tolerance, but the root elongation at the seedling stage was large. ApNST1.3 seedlings showed significant salt tolerance, while adult seedlings showed no salt tolerance. The results of resistance at seedling stage and adult seedling stage were inconsistent. We deduced that the seedling stage was a process of nutrient accumulation, or the nutrient conditions given by the medium under aseptic conditions might not be conducive to its growth and could not respond in a timely fashion in the face of biological stress. Therefore, our conclusions were mainly based on the natural soil conditions, and were, namely, that ApNST1.1 has salt resistance and ApNST1.2 is resistant to low temperature. 
By observing the cross sections of transgenic tobacco, it was found that the xylem of ApNST1 was significantly thickened, and the cell wall was also thickened, although not significantly; the xylem thickening was mainly due to the increased number of xylem cells, indicating that NST1 played a role. In combination with the study of Liu et al. [3], we believe that NST1 is related to the formation of the plant secondary cell wall, and that ABA-induced phosphorylation of NST1 promotes the downstream reactions. The formation of the secondary cell wall is conducive to plant growth, and overexpression of NST1 can achieve this effect. The characteristics of adult seedlings and seedlings were not completely consistent. Excluding the potential for slight errors in manual measurement, we assume that the function of the NST1 gene may be partially redundant with that of other genes, which will be explored and studied later [40]. The xylem of ApNST1.2 is obviously thinner, but it can respond to low-temperature stress. We suspect that xylem thickness varies in response to abiotic stress, especially temperature stress, and this idea remains to be investigated. In addition, the growth cycle of A. purpurea is long and its rooting is slow; thus, due to time restrictions, this study was unable to conduct abiotic stress treatments on transgenic A. purpurea. Instead, transgenic tobacco was selected for this study. However, the observation and treatment of transgenic chrysanthemum are also being carried out. Conclusions In this study, we analyzed the amino acid sequences of the four splice variants from A. purpurea NST1 and found that ApNST1.1 had A5SS, ApNST1.2 had A3SS, and ApNST1 had both splicing types. We verified the different resistance effects of the four splice variants in transgenic tobacco, and we concluded that ApNST1.1 has salt resistance and ApNST1.2 is resistant to low temperature. The cross-cut structure of the transgenic tobacco stem was observed by paraffin sectioning to determine the changes in xylem thickness and the reasons for these changes. We believe that xylem thickness is related to the growth rate: stems with a slow growth rate had obviously thicker xylem, while stems with a fast growth rate did not, which is consistent with the change in stem height (Figure 3a,e). However, the relationship between xylem thickness and the non-stress treatment is still unknown, and this is something that we need to pay attention to in future research. In this study, splice variants with obvious salt and cold resistance were identified, which enriches the germplasm resources of chrysanthemum. However, the mechanism of action in A. purpurea is still unclear, and more time is needed to verify the role of these four splice variants in A. purpurea. In the future, we hope to investigate this and identify the most resistant A. purpurea material. The objective of our research is to verify whether the obtained NST1 splice variants have a resistance function and to identify resistant materials. This study initially investigated which splice variants play a role in resistance. Our long-term goal is to improve the adaptability of A. purpurea to allow its ornamental application in many areas. The functional identification in transgenic tobacco in this paper lays a foundation for this long-term goal and provides the possibility of obtaining clear resistance to stressors in A. purpurea. 
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/horticulturae9080916/s1, Figure S1: Relative expression of transgenic tobacco; Table S1: List of the primers used in the analyses of gene expression of NST1 by qRT-PCR.
7,487.2
2023-08-10T00:00:00.000
[ "Biology", "Environmental Science" ]
SINS/CNS/GNSS Integrated Navigation Based on an Improved Federated Sage–Husa Adaptive Filter Among the methods of the multi-source navigation filter, as a distributed method, the federated filter has a small calculation amount with Gaussian state noise, and it is easy to achieve global optimization. However, when the state noise is time-varying or its initial estimation is not accurate, there will be a big difference with the true value in the result of the federated filter. For the systems with time-varying noise, adaptive filter is widely used for its remarkable advantages. Therefore, this paper proposes a federated Sage–Husa adaptive filter for multi-source navigation systems with time-varying or mis-estimated state noise. Because both the federated and the adaptive principles are different in updating the covariance of the state noise, it is required to weight the two updating methods to obtain a combined method with stability and adaptability. In addition, according to the characteristics of the system, the weighting coefficient is formed by the exponential function. This federated adaptive filter is applied to the SINS/CNS/GNSS integrated navigation, and the simulation results show that this method is effective. Introduction With the advancement of the navigation and the technology of information fusion, the multi-source navigation [1] has become the main composition of the integrated navigation with high precision and reliability. In practical applications, due to the geographical location, equipment failure and radio interfered, some navigation modes will not work, but other undisturbed navigation modes will continue to operate, enabling the multi-source navigation to continue navigating for a long time. Through the detection [2] and correlation [3] of the data, information fusion can improve the accuracy of state estimation. In addition, in the field of navigation, the information fusion technology can be used to solve the problem of the low accuracy of a single navigation source in the multi-source navigation [4]. Therefore, the information fusion technology of multi-source navigation is the key to navigation operations. For the problem of multi-source information fusion, Carlson proposed the federated filter, which can use the information distribution principle to eliminate the correlation of each sub-state estimation. The distributed principle makes the calculation smaller and more fault-tolerant, and global optimal or sub-optimal estimates can be obtained through effective fusion, which makes the federated filter widely used [5]. The federated filter can be composed of one main filter and several local filters, the main filter and the local filters have the same state equation, and the measurement equations of the local filters differ according to the measurement information. In the traditional federated Kalman filter algorithm, Introduction of the Federated Kalman Filter When the navigation process involves three or more navigation methods, it is difficult to combine the measurement information of each method effectively by using a single filter. For this situation, the researchers have proposed a number of distributed filter methods. The standard distributed algorithm [9] was proposed, which is intended to establish the relationship between the distributed and centralized filter; considering the unknown correlation of local estimations, there is the covariance crossover algorithm [10] as well as the federated algorithm [11]. 
Federated Kalman filter is a special form of distributed Kalman filter and it was proposed by Carlson in the United States in 1998. It consists of several local filters and one main filter, and it is a decentralized filter method with block estimation and a two-step cascade. It assigns dynamic state and observation information to each local filter and each local filter operates separately. The results of local filters are combined according to the information distribution factors to obtain the result of the global filter. Obviously, the key of the operation lies in the information distribution process. Principle and Structure of the Federated Kalman Filter The federated filter operation process utilizes the measurement information of each subsystem and the common reference system for parallel independent operations. Suppose that there are N local filters, the subscript of the main filter is m, and the subscript of the global filter is g, the state and measurement equations of each local filter and the main filter are as follows: where X i,k is the state vector of the local filter or main filter, Z i,k is the measurement vector, Φ i,k−1 is the state transition matrix of the i th local filter at time k − 1; H i,k−1 is the measurement matrix; W i,k−1 and V i,k are the state noise matrix and measurement noise matrix of the local filter respectively, and they are all Gaussian white noise matrices, the variances are Q i,k−1 and R i,k respectively. It should be noted that the main filter has no measurement equation, i.e., when i = m,only the state equation works. Suppose that the local optimal estimationX i,k−1 and its corresponding covariance P i,k−1 are obtained at time k − 1, and these local optimal estimations are fused in the global filter according to the optimal fusion estimation algorithm to obtain the global optimal estimationX g,k−1 and its variance P g,k−1 . The state noise covariance matrices of the local filter and the global filter are Q i,k−1 and Q g,k−1 respectively, and P g,k−1 and Q g,k−1 are amplified by β −1 i times and then fed back to the local filters for parameter reset, i.e., the parameter value of k time is obtained: where β i is the information distribution factor. In addition, according to the principle of information conservation, the information distribution factor β i needs to satisfy: At the same time, the federated filter has the following principles of information distribution: Through the above equations, the federated filter links each local filter with the main filter, and realizes the fusion process through information distribution, and different federated modes can be obtained by setting different information distribution factor β i [12]. The improved federated filter reset method proposed in this paper uses Equations (2) and (4) to complete the information fusion process through the allocation and addition of global filter and local filter without the participation of the main filter. For the integrated navigation of SINS, CNS and GNSS in this paper, two local filters are set-SINS/CNS local filter 1 and SINS/GNSS local filter 2, each of which is independent in data processing. As for the setting of the main filter, it is necessary to consider the actuality of the system. For this system, in the case that the initial state noise estimation is not accurate or the state noise is time-varying, the main filter is not accurate without the measurement equation, so the main filter can be left. 
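In standard notation, the local-filter model and the information-sharing reset described above can be summarized as follows; this is a compact sketch using the definitions of this section, with the usual conventions assumed where the text does not fix them:

X_{i,k} = \Phi_{i,k-1} X_{i,k-1} + W_{i,k-1},   Z_{i,k} = H_{i,k} X_{i,k} + V_{i,k},   i = 1, ..., N (and i = m, state equation only)

Reset (information distribution) at each step:
\hat{X}_{i,k-1} = \hat{X}_{g,k-1},   P_{i,k-1} = \beta_i^{-1} P_{g,k-1},   Q_{i,k-1} = \beta_i^{-1} Q_{g,k-1}

Information conservation:
\beta_m + \sum_{i=1}^{N} \beta_i = 1,   0 \le \beta_i \le 1   (with \beta_m = 0 when the main filter is omitted, as in this paper).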
The data of each navigation subsystem is input to the corresponding local filter, and the output is the result of information fusion, and the global filter result can be obtained, then the global state estimation is realized. The structure of the federated filter is as Figure 1: As can be seen from Figure 1, on the one hand, the information from the global filter is output to the outside, and, on the other hand, it is fed back to each sub-filter. The existence of the feedback process makes the information fusion process of the distributed filter more efficient and accurate. Algorithm Flow of the Federated Kalman Filter For the federated filter structure without the main filter (i.e., β 1 + β 2 + · · · + β N = 1 ), parameters and their changes of the local filter affect the result of the global filter [13]. Taking the discrete model in Equation (1) as an example, the steps of the federated filter algorithm are mainly as follows: a. Initialization: Firstly, global estimation initialization is performed, and the initial value of the state vectorX g,0 , the initial value of the state covariance P g,0 , and the initial value of the state noise Q g,0 are known. b. Information distribution (reset): Secondly, the information distribution process is as follows: In this process, the value of β i affects the proportion of each local filter, and the principles of subsystems are not the same as each other. The specific selection principle is described in Section 4.1: c. Local estimation: The state prediction:X The variance prediction: The variance is updated: The state measurement is updated: d. Global integration: The variance fusion: The state fusion:X g,k = P g,k After each round of the filter calculation process, it will return to the information distribution (reset) link to start the next round of calculation. Introduction of the Sage-Husa Adaptive Filter The Sage-Husa algorithm is an adaptive filter algorithm based on the statistical characteristics of the system [14]. For the case that the statistical properties of the state and measurement noise are unknown, the maximal posterior estimation principle can be used to obtain the estimated value [15] to improve the filter accuracy. The estimation algorithm is suitable for general linear time-varying systems. The recursive calculation process is simple and suitable for many fields. Consider the mathematical model of the linear discrete systems: where Φ k−1 is the state transition matrix; H k−1 is the measurement matrix; W k−1 and V k are the state noise matrix and the measurement noise matrix, and the covariance matrices are Q k−1 and R k , respectively, and their statistical properties are unknown. For the systems where the variance W k of measurement noise is time-varying or unknown, the general Kalman filter algorithm is difficult to meet the accuracy requirements of the system due to the lack of updating procedures for the system and measurement noise. From the aspect of optimizing the filter performance, the contribution rate of the new data to the filter can be correspondingly improved, so the operator d k is needed, satisfying where b is the forgetting factor, and 0 < b < 1. The corresponding iterative factor's updating process is as follows:q whereq k andr k are the estimates of the mathematical expectation of the system error and measurement error at time k, respectively.Q k andR k are the estimates of the variance of the system error and measurement error at time k, respectively. 
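As a concrete illustration of steps a-d of the algorithm flow above, the following minimal Python sketch runs one cycle of the federated filter without a main filter; the function and variable names are ours, and the equations used are the standard Kalman prediction/update and information-fusion forms.

import numpy as np

def local_kalman_step(x, P, Q, R, Phi, H, z):
    # c. Local estimation: predict, then update with the local measurement z.
    x_pred = Phi @ x
    P_pred = Phi @ P @ Phi.T + Q
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

def federated_cycle(filters, Q_g, betas):
    # filters: list of dicts with keys x, P, Q, R, Phi, H, z (one per local filter).
    # c. Run every local filter independently.
    results = [local_kalman_step(f["x"], f["P"], f["Q"], f["R"],
                                 f["Phi"], f["H"], f["z"]) for f in filters]
    # d. Global fusion: P_g = (sum_i P_i^-1)^-1, X_g = P_g * sum_i P_i^-1 X_i.
    P_invs = [np.linalg.inv(P) for _, P in results]
    P_g = np.linalg.inv(sum(P_invs))
    x_g = P_g @ sum(P_inv @ x for (x, _), P_inv in zip(results, P_invs))
    # b. Information distribution (reset) for the next cycle, amplified by 1/beta_i.
    for f, beta in zip(filters, betas):
        f["x"], f["P"], f["Q"] = x_g.copy(), P_g / beta, Q_g / beta
    return x_g, P_g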
Combining the above iterative factors with the Kalman filter algorithm, a robust adaptive Kalman filter algorithm which can automatically track noise can be obtained as follows: The one-step prediction equation: The mean square error of the one-step prediction: The gain of the filter: The estimation of the mean square error: The state estimation:X By adjusting the forgetting factor b, the adaptive process of the system can be fulfilled. Selection of the Federated Filter Information Distribution Factors It is known that the structure and parameter updating process of federated filter is closely related to the selection of information distribution factor β i [16]. Therefore, it is necessary to select the appropriate β i according to the characteristics of the system to achieve better filter effect. In the present literature, the selection methods of β i are mainly divided into two types, one is based on the fixed ratio [17], which is suitable for the process without dynamic changes or the proportion of state covariance remains unchanged. For example, when the parameters of each local filter are the same, the distribution can be set as β i = 1 N . The other method is used for the case in which the relevant parameters of the subsystem change with time. In this time, the dynamic adaptive method can be used to select the information distribution factor [18]. The distribution methods are mainly divided into several types: (1) According to the trace of the P i matrix [19,20]: Let The information distribution factor can be obtained by estimating the state vector covariance matrix P i . (2) According to the F norm of the P matrix [21]: Since the parameters of the local filters are not the same and it cannot guarantee that the parameter weight remains unchanged, it is necessary to select an information distribution factor with dynamic adaptive ability. Considering the computational complexity of these algorithms, this paper chooses Equation (25) as the solution algorithm of β i . Selection of Federated Adaptive Filter's Partition Coefficient and Its Feasibility Analysis In the actual situations, the statistical properties of the state noise are often difficult to determine, and the inaccurate state noise covariance will affect the accuracy of the filter. Therefore, in the framework of the federated filter, the simplified Sage-Husa adaptive filter [22] can be chosen as the algorithm of the local filter, thus an improved federated adaptive filter algorithm can be proposed. The traditional federated Kalman filter does not have the ability to eliminate the influence of deviation. For the state noise covariance, after that, the initial value Q g,0 is given, the iterative process at each moment simply re-updates the value of Q g,0 according to the information distribution factor. When there is a deviation in the initial value, the deviation will always exist in the filter process, which will affect the filter result. Assume that where Q 0 is the true value of the initial state noise, ∆Q 0 is the deviation between the true value and the estimated value. Due to the existence of ∆Q 0 , the filter effect of the traditional federated Kalman filter is difficult to guarantee. When the Sage-Husa adaptive filter is selected by local filter, the influence of the initial deviation on the filter is gradually weakened due to the update ofQ i,k , which makes the filter more adaptable. 
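For reference, a commonly used form of the simplified Sage-Husa recursion discussed in this section is sketched below in standard notation; these are the usual expressions for estimating the state-noise statistics, which we assume correspond to the ones intended here:

d_k = (1 - b) / (1 - b^{k+1}),   0 < b < 1

\hat{X}_{k|k-1} = \Phi_{k-1} \hat{X}_{k-1}
P_{k|k-1} = \Phi_{k-1} P_{k-1} \Phi_{k-1}^{T} + \hat{Q}_{k-1}
K_k = P_{k|k-1} H_k^{T} (H_k P_{k|k-1} H_k^{T} + R_k)^{-1}
\varepsilon_k = Z_k - H_k \hat{X}_{k|k-1}   (innovation)
\hat{X}_k = \hat{X}_{k|k-1} + K_k \varepsilon_k,   P_k = (I - K_k H_k) P_{k|k-1}

\hat{q}_k = (1 - d_k) \hat{q}_{k-1} + d_k (\hat{X}_k - \Phi_{k-1} \hat{X}_{k-1})
\hat{Q}_k = (1 - d_k) \hat{Q}_{k-1} + d_k (K_k \varepsilon_k \varepsilon_k^{T} K_k^{T} + P_k - \Phi_{k-1} P_{k-1} \Phi_{k-1}^{T})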
In fact, the measurement noise of the system is related to the accuracy of the measuring instrument, the distance and the angle of the target. In this paper, it is assumed that the statistical properties of the measurement noise are known, and the simplified Sage-Husa adaptive algorithm can be obtained by using statistical characteristics of state noise [23]. During the operation of federated adaptive filter, the iterative process of federated filter continuously updatesX g,k , P g,k , and Q g,k through Equations (2) and (4), while adaptive filter updateŝ q i,k andQ i,k through Equations (16) and (17). Since there may be a deviation in the initial value of the state noise covariance, it is considered to combine the federated updating principle with the adaptive principle, and use the combined federated adaptive principle to update the covariance of the state noise. For each local filter, it is assumed that there are two updating methods-the federated principle and the adaptive principle method, which are as follows: whereQ 1 i,k+1 andQ 2 i,k+1 are the state noise covariance estimations of the ith filter at k + 1 moment by using the federated algorithm and the adaptive algorithm, respectively. It is known that the updating process of the federated principle is related to the initial value. When the initial value is accurate or it is Gaussian white noise, it can use the information distribution factor to obtain the optimal solution globally; in addition, for the system with inaccurate or time-varying value, the adaptive updating process can adjust the adaptive degree of the filter by selecting the operator d k [24], and it is related to the forgetting factor b. In the operation of improved federated adaptive filter, the proportion of adaptive algorithm distribution increases with the change of state noise. Consider weighting the two update processes to get the following equation: According to the variation characteristics of the state noise, the proportion of α in the equation should decrease, and the federated adaptive filter should always satisfy 0 < α < 1. In the first quadrant, the changes of the linear function do not satisfy the above conditions, and the inverse proportional function, the transformed exponential function and logarithmic function can satisfy the conditions. In this paper, the transformed exponential function is selected as the changing function of the weight, that is, where α k is the weighting ratio of the federated method at k time; σ > 0, σ is chosen to control the rate of the change of α. The mean square error (MSE) of state noise satisfies where bias (Q) is the deviation of state noise, var (Q) is the variance. There will be a deviation in the setting of the initial value according to the federated principle, and the result of the adaptive filter will have a large variance when the number of samples is small. Therefore, the deviation of the state noise variance is mainly from the federated updating method, and the variance mainly comes from the adaptive updating method. For the sake of convenience, according to the variation characteristics of the weight, the initial variance ofQ 1 i,k+1 in the federated algorithm is set to 0, and the initial deviation ofQ 2 i,k+1 in the adaptive algorithm is set to 0. 
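One way to write the combined update, together with a weighting function that has the stated properties (monotonically decreasing, bounded between 0 and 1, with σ controlling the rate of change), is sketched below; the particular exponential form is our assumption, chosen only to satisfy those requirements:

\hat{Q}_{i,k+1} = \alpha_k \hat{Q}^{1}_{i,k+1} + (1 - \alpha_k) \hat{Q}^{2}_{i,k+1}

\alpha_k = e^{-k/\sigma},   \sigma > 0   (so \alpha_0 = 1 and \alpha_k decreases toward 0 as k grows)

MSE(\hat{Q}) = bias(\hat{Q})^2 + var(\hat{Q})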
Thus, the mean square error of the state noise variance estimation of the federated adaptive filter at k + 1 time should satisfy the following equation: After analysis, it can be seen that bias Q 1 i,k+1 remains unchanged and it exists at the initial time,var Q 1 i,k+1 = 0; while var Q 2 i,k+1 has a large value in the initial time due to the few samples, and it gradually decreases with the number of the samples increases, and bias Q 2 i,k+1 = 0. Thus, Equation (33) can be changed as: Feasibility Analysis of the Federated Adaptive Filter's Partition Coefficient According to Equation (34), in the updating process ofQ i,k+1 by the federated adaptive algorithm, MSE Q i,k+1 consists of two parts, and bias Q 1 i,k+1 remains invariant after the initial value is determined. Therefore, it is necessary to ensure that var Q 2 i,k+1 decreases with time, thus the feasibility and superiority of the algorithm are guaranteed. For Equation (34), assume that To make var Simplified: It can be seen from Equation (15) that the operator d k can be controlled by selecting the forgetting factor b, so the federated adaptive algorithm is feasible under the conditions of Equation (38). The Sage-Husa adaptive filter has a small sample size at the initial time, and the estimated state noise variance has a large variance. At this time, if the value of the forgetting factor b is increased, the adaptive convergence will slow down. Therefore, the integrated method can guarantee the convergence speed as well as the estimation accuracy. The dynamic information distribution of federated adaptive filter is completed by using the exponential function as the weighting algorithm. In summary, it is assumed thatQ k is the state noise variance estimation at k time of the federated adaptive algorithm, the algorithm flow of the federated adaptive filter is as follows: Through the operation flow shown in Figure 2, a federated adaptive algorithm can be obtained, which is adaptive and stable to meet the requirements of the multi-source system navigation with unknown state noise characteristics. Figure 2. The algorithm flow of the federated adaptive filter. SINS/CNS/GNSS Integrated Navigation Model 1. ENU geography coordinate system(t): The origin of the coordinate system is the center of the carrier, the x t axis points eastward along the direction of the reference ellipsoid ring, the y t axis points north along the direction of the reference ellipsoid meridian, and the z t axis is determined by the right-hand rule. 2. Aircraft body coordinate system(b): Taking the satellite as an example, the body coordinate system is a coordinate system fixed on the satellite body. The coordinate origin is the satellite centroid, and the x b axis, y b axis and z b axis are usually defined on the satellite's inertia main axis. 3. Navigation coordinate system(n): The navigation coordinate system is the coordinate system selected according to the needs of solving the navigation parameters. This paper selects SINS, CNS and GNSS as the three basic navigation methods. By using the high-precision attitude information provided by CNS and the position as well as the velocity information provided by GNSS, the local filters use the Sage-Husa adaptive filter to estimate the position, velocity and attitude errors of SINS accurately, and correct the inertial device error of the SINS. Finally, the system will achieve continuous high-precision navigation of the aircraft. 
As shown in Figure 1, in this paper, there is no main filter; two local filters are used to implement the federated filter. They are SINS/CNS local filter 1 and SINS/GNSS local filter 2, respectively. The ENU coordinate system is used as the reference coordinate system, the flight height is assumed as h, and the earth is assumed as a spheroid. The State Equation of the Integrated Navigation System The state equation of the SINS/CNS/GNSS integrated navigation system consists of the error equations of SINS and the inertial devices, in the form of Take the state parameter of the system as 15 dimensions, and record it as: where φ E φ N φ U denotes the three mathematical platform angles error; δv E δv N δv U denotes the velocity error on three axes; δL δλ δh denotes the latitude, longitude and height error; ε x ε y ε z and ∇ x ∇ y ∇ z are the gyro random constant drift and the accelerometer random constant drift. The state noise consists of the random error of the gyroscope and the accelerometer. The expression is State noise transformation matrix is: where C n b denotes the rotation matrix of the aircraft body coordinate system to the navigation coordinate system. The Measurement Equation of the Integrated Navigation System It is known that the federated adaptive filter of the integrated navigation system contains two local filters, and the ENU geography coordinate system is selected as the navigation coordinate system. The SINS/CNS subsystem uses the transformed mathematical platform angles error as the measurement vector of the Sage-Husa adaptive filter. The measurement equation is where Z 1,k denotes the measurement vector, , V 1 denotes the difference between the star sensor and the gyroscope drift error. The SINS/GNSS subsystem uses the difference between the position and velocity of SINS and GNSS as the measurement information of adaptive filter. The measurement equation is where denotes the speed difference between the SINS and GNSS in the three directions; 2 denotes the position difference between the SINS and GNSS in the three directions. Simulation and Analysis Assume that the trajectory of the aircraft is shown in Figure 3: Initial state noise covariance estimation is unbiased, which is Q = diag[w 2 g , w 2 g , w 2 g , w 2 a , w 2 a , w 2 a ], and w g = 0.5π/180/3600, w a = 50 · 10 −6 g, where g is the acceleration of gravity; the initial position of the aircraft is 116 • of east longitude, 39 • of north latitude; the shooting angle is 90 • ; the thrust acceleration is 40 m/s 2 at the first 60 s; in the launch inertial system, the initial pitching angle is 90 • and remains the same during the first 10 s, then it changes from 90 • to 30 • in the form of quadratic function during the next 50 s, and then it remains the same during the rest of the time; in addition, the heading angle and rolling angle are both 0 • throughout the whole process; the simulation time is 1110 s, the sampling interval is 0.01 s, and 50 Monte Carlo simulations are performed. (1) Gaussian state noise and the estimation are unbiased: The condition setting with Gaussian state noise and unbiased estimation is the same as the basic simulation conditions above. Taking the average of the errors, the improved federated Sage-Husa It can be seen from the Figures 4-6 that there are almost no differences in the navigation errors of the three methods in the three directions. The following table is a quantitative analysis. 
It can be seen from the Tables 1 and 2 that the navigation errors of the three methods in three directions are almost the same, and the subtle differences are too small to be noticed, that is, when the state noise is Gaussian and the estimation is unbiased, the three methods are roughly the same. (2) Gaussian state noise and the estimation are biased: The settings of the parameters are same as those in Tabel (1), and the initial estimation of state error covariance is Q = Q 10 . It can be seen from the above Figures 7-9 and the Tables 3 and 4 that, when the estimation of the state noise is deviated, even if the state noise is Gaussian, the filter effects of the three methods are different. In the comparison of position and velocity errors, the improved federated adaptive filtering is the best, followed by the federated adaptive filter, the federated filter is not effective because it depends on the initial value of the state noise. (3) Non-Gaussian state noise and the estimation are biased: In the test (2), the setting of the parameters is added as follows: The SINS gyro random constant drift is 0.2 • /h, the accelerometer's random offset is 50 µg, and the initial misalignment angle is The above simulation is performed under the condition that the state noise is non-Gaussian and the estimation is biased, the tables are obtained in the case of using the federated Kalman filter, the federated adaptive filter and the improved federated adaptive filter to compare speed with position error in three directions. It can be seen from Figures 10-12 that, in the initial time, the three methods have large fluctuations owing to too few samples. As the number of samples increases, the three methods get stable gradually. In addition, when the number of samples increases to a certain extent, the advantages of improved federated adaptive filter gradually appear, which is the best among the three methods, while the federated adaptive method is the second, and the federated Kalman filter is the worst. The error statistics in three directions are shown in Tables 5 and 6. It can be seen from the comparison of the position and velocity errors that in the integrated navigation process, the effect of the improved federated adaptive filter is better than the other two methods in the three directions. (4) Time-varying state noise and the estimation are biased: Let the constant offset of the gyroscope in test (3) be set to 0, and it increases to 0.2 • /h with time. The random offset of the accelerometer is set to 0 at the beginning, and it evenly increases to 50 µg with time. The comparison of the three methods in three directions is as Figures 13-15: It can be seen from the above Figures 13-15 and the Tables 7 and 8 that, when the state noise is time-varying, the filter effect of the three methods is similar to the case of the non-Gaussian state noise. Improved federated adaptive filter has the best effect of the position and velocity error, followed by federated adaptive filter, while the federated Kalman filter is the worst. Comparing the improved federated adaptive filter and federated filter in different situations, comparison of position error under the conditions of test (1) and test (4) can be taken as an example, and the precision changes of the two filters in E-N-U directions are shown in Table 9. "+" means the precision is improved, while "−" means the accuracy is reduced. 
It can be seen from Table 9 that the precision of the improved federated adaptive filter has few changes for different conditions of state noise, while the federated filter's precision decreases significantly, which shows that the improved federated adaptive filter has little dependence on the initial noise estimation, but the federated filter depends more. Therefore, to sum up the above four cases, it can be seen that the improved federated adaptive filter algorithm can perform operations based on the state noise with unknown characteristics, and its filter accuracy is higher than the other two methods. However, since the filter algorithm designed in this paper improves the estimation of the statistical characteristics of the unknown state noise, the difference of the velocity error between the three methods is not as obvious as the position error, and it is related to the system characteristics. In summary, for different systems, the weighting mode and weighting function should be selected according to the characteristics of the system to obtain the optimal result of the federated adaptive filter. Conclusions In this paper, a filter algorithm based on the federated filter and simplified Sage-Husa adaptive filter is proposed for systems with time-varying state noise and biased estimation. The algorithm uses federated filter as the framework of the multi-source integrated navigation, and the local filters choose the improved Sage-Husa adaptive filter as the algorithm. In the updating process of the parameters, the federated and the adaptive principle are combined, and the exponential function is used to characterize the weighting value changes of the two updating principles, so as to obtain an improved federated adaptive algorithm with dynamic adaptive ability. Through the theoretical analysis and simulations of the improved federated adaptive algorithm, it can be seen that, when the number of samples is sufficient, the filter will tend to be stable and convergent. Compared with the federated Kalman filter and the common federated adaptive filter, the accuracy of this improved method is the highest. It shows that the improved federated Sage-Husa adaptive filter is effective in improving the federated algorithm, and it can weaken the influence of the initial estimation error of the state noise to some extent and improve the navigation accuracy. Conflicts of Interest: The authors declare no conflict of interest. Abbreviations SINS strapdown inertial navigation system CNS celestial navigation system GNSS global navigation satellite system MSE mean square error var variance
6,907.6
2019-09-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Security of six-state quantum key distribution protocol with threshold detectors The security of quantum key distribution (QKD) is established by a security proof, and the security proof puts some assumptions on the devices consisting of a QKD system. Among such assumptions, security proofs of the six-state protocol assume the use of photon number resolving (PNR) detector, and as a result the bit error rate threshold for secure key generation for the six-state protocol is higher than that for the BB84 protocol. Unfortunately, however, this type of detector is demanding in terms of technological level compared to the standard threshold detector, and removing the necessity of such a detector enhances the feasibility of the implementation of the six-state protocol. Here, we develop the security proof for the six-state protocol and show that we can use the threshold detector for the six-state protocol. Importantly, the bit error rate threshold for the key generation for the six-state protocol (12.611%) remains almost the same as the one (12.619%) that is derived from the existing security proofs assuming the use of PNR detectors. This clearly demonstrates feasibility of the six-state protocol with practical devices. Quantum key distribution (QKD) allows legitimated users to securely communicate, and the security of QKD, especially qubit-based QKD, has been well studied so far [1].Since we have to assume any possible attack when we consider the security, the assumption of qubit-detection must be confirmed or at least its fraction must be estimated with the use of photon number resolving detectors, detector decoy idea [2], or estimation method via monitoring the double click event [3], all of which require some modifications to QKD protocols. Another approach for the security proof of QKD with threshold detectors is to consider the so-called squash operator [4] which squashes an optical mode down to a qubit state.This approach only requires to assign the double-click event (detectors "0" and "1" simultaneously click) to a random bit value, which is reasonable [5].The existence of the squash operator for BB84-type measurement has been proven [6,7], i.e., the statistics of the outcomes of the BB84 measurement can be interpreted as if it stemmed from the BB84 measurement on qubits whatever optical signal Bob actually receives. One might think that the squash operator should exist for any measurement with two outcomes, including the measurement of the six-state protocol [8], where we perform measurements along a basis, Y basis, in addition to X and Z bases in BB84.In the case of the qubit-based six-state protocol, the measurement along the extra basis lets us learn more about Eve's information gain, resulting in a higher bit error rate threshold than that of BB84, which is a main advantage of the qubit-based six-state protocol over BB84.Unfortunately, it turns out that the squash operator for the six-state protocol is proven not to exist [7], and it is unknown whether the advantage still holds with the use of threshold detectors. 
Intuitively, sending more than one-photon is not useful for the eavesdropping since it may only increase the bit error rate, and it is hard to imagine that the advantage of the qubit-based six-state protocol suddenly vanishes once we lose information about which signal is a singlephoton.In other words, to consider the security of the six-state protocol with threshold detectors is to consider the robustness of a qubit-based QKD protocol even if there is no squash operator.This is indeed one of the essential features that any practical qubit-based QKD must possess, and this issue must be seriously taken into account for the design of a qubit-based QKD protocol. In this letter, we prove the robustness of the six-state protocol by showing the bit error rate threshold remains almost the same (12.611%)compared to the one of the qubit-based six-state protocol (12.619%).This result shows that sending multiple photons hardly helps Eve, which confirms the intuition mentioned above.The rate is clearly larger than the rate of BB84 with threshold detectors (11.002%) [6,7], and this demonstrates the advantage of using two additional states in the practical situation.We remark that our work assumes the use of a single-photon as the information carrier, but we can trivially accommodate the use of an attenuated laser source by GLLP idea [4]. This letter is organized as follows.We start with a brief description of how the protocol works, and then we move on to relatively long outlining the proof, and we devote the rest of the paper to a more detailed explanation.Finally, we summarize this letter. Since polarization state of a single-photon and the 1 2spin state are mathematically equivalent, we use 1 2 -spin notation for the explanation in this letter.In the sixstate protocol, Alice first generates a random bit value b = −1, 1 and choose one basis α randomly out of three bases X, Y , and Z.Then, she sends over a quantum channel a qubit with state being |α b that is the eigen state of α basis of 1 2 -spin whose eigen value is b/2.Bob randomly chooses one basis randomly out of the three bases, and he measures the spin along the chosen di-rection.Alice and Bob compare over a public channel the bases they used, and keep the bit value if the bases match, othrewise discard it.Alice and Bob repeat this step many times, and they apply bit error correction [9] and privacy amplification [9] to the resulting bit string (sifted key), and they share the key. Next, we outline our proof.Our proof employs the security proof based on complementarity scenario proposed by Koashi [10].In this proof, we consider two protocols, one is the actual protocol which Alice and Bob actually conduct, and the other one is a virtual protocol.Let us assume that Alice has a qubit state, which may be fictitious, and let Z basis be Alice's key generating qubit basis.The goal of the actual protocol is that Bob agrees on Alice's bit values along Z basis.On the other hand, the goal of the virtual protocol is to create an eigen state of an observable X, which is conjugate to Z, with the help of Alice and Bob's arbitrary quantum operations that commute with Alice's key generating measurement.It is proven that if Alice and Bob are free to choose which protocol to execute after the actual classical and quantum communication and if they can accomplish its goal whichever choice they have made, then unconditionally secure key can be distilled. 
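As a toy numerical illustration of the sifting step just described (this is only a sketch of the protocol flow, not part of the security analysis; the bit-flip error model and the 5% error rate are arbitrary choices of ours):

import numpy as np

rng = np.random.default_rng(0)
n = 100000
bases = np.array(["X", "Y", "Z"])

alice_bits = rng.integers(0, 2, n)       # stands for b = -1, 1
alice_bases = rng.choice(bases, n)       # Alice's random basis choice
bob_bases = rng.choice(bases, n)         # Bob's independent basis choice

# Model: Bob recovers Alice's bit when the bases match, except for a bit flip
# occurring with probability e_b (a stand-in for channel noise).
e_b = 0.05
bob_bits = np.where(rng.random(n) < e_b, 1 - alice_bits, alice_bits)

keep = alice_bases == bob_bases          # basis comparison over the public channel
sifted_alice, sifted_bob = alice_bits[keep], bob_bits[keep]
print("sifting ratio:", keep.mean())                        # about 1/3 for three bases
print("observed QBER:", (sifted_alice != sifted_bob).mean())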
In order to define Alice's qubit in the six-state protocol, suppose that Alice first prepares a qubit pair in the state [11] (we choose this singlet state to fully make use of its symmetry later), measures one of the qubit by X, Y , or Z-basis, and sends the other qubit to Bob.Since this process outputs the exactly the same state as the one of the actual protocol, we are allowed to work on this scenario without losing any generality.In the case that we consider the security of the key generated along Z basis, and once Alice and Bob can generate |X 1 state in Alice's side in the virtual protocol then we are done since the agreement on the bit value in the actual protocol can be trivially made via classical error correction over a public channel (the syndrome is either encrypted [12] or not [13]). For the generation of |X 1 state, an important quantity is the so-called phase error rate, which is the ratio that Bob's estimation of Alice's bit string in X-basis results in erroneous, and if the estimation of the phase error rate is exponentially reliable then Alice can generate |X 1 by random hashing along X-basis [12][13][14].More precisely, the key generation rate G, assuming a perfect bit error correcting code, can be expressed as Here, n sif is the empirical probability of having the sifted key, H(X) is Shannon entropy of the bit error, and H(Z|X) is Shannon entropy of the phase error conditional on the bit error pattern.In other words, n sif H(X) is the number of the hashing along Z-basis needed for the agreement of the bit values in the actual protocol and n sif H(Z|X) is the one along X-basis needed for the generation of the X-basis eigen state in the virtual protocol [10].Hence, the key for the improvement in the key generation rate is how to minimize the conditional entropy H(Z|X). For the estimation, we assume without loss of generality that states received by Bob are classical mixtures of photon number eigen states, and let P N be the probability of receiving a state having N photons [15].Since we have no direct access to P N , we have to assume the worst case scenario where Eve maximizes the induced phase error rate by classically mixing up each photon number state and sending them to Bob.As we will see later, it can be proven that states with photon number being greater than 3 induces too much bit errors and we can neglect those states for the analysis.Hence, we can concentrate only on N = 1, 2, 3 cases, and especially we want to derive the corresponding mutual information between the bit and phase errors. 
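Before proceeding, the key generation rate G introduced above can be written out explicitly, consistent with the accounting of the two hashing costs (our rendering of the standard expression):

G = n_{sif} [ 1 - H(X) - H(Z|X) ]

i.e., from the sifted fraction n_{sif} one subtracts the hashing cost n_{sif} H(X) along the Z basis (bit error correction) and the hashing cost n_{sif} H(Z|X) along the X basis (privacy amplification).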
To compute the mutual information, we introduce Bob's qubit by employing the BB84 squash operator, and we have to estimate what statistics we would have obtained if we had performed the measurement along Ỹ basis onto the resulting qubit (here, "tilde" means that this is about a qubit space and fictitous).In general, the actual Bob's measurement along Y basis does not coincide with the measurement along Ỹ basis, however they do only when N = 1, 2 thanks to the existence of the squash operator for the six-state protocol [7].This gives the same mutual information for N = 1, 2 as the one of the qubit-based six-state protocol.We note that to employ BB84 squash, we have to randomly pick up two bases (for the explanation, we assume that we have chosen X and Z bases) out of the three bases in the actual protocol.This random choice does not change the actual protocol at all.The reason is that we can always split the basis choice into two steps: the first one is the choice of two bases out of the three and then one basis is chosen from the two.Moreover, we assume in the actual protocol that Alice and Bob perform joint random bit-flip operation to make the analysis simpler. To analyze N = 3 case, we use the symmetry of the density operator.As a result, we can estimate the mutual information.Finally, by mixing up the photon number state N = 1, 2, 3 based on the worst case scenario, we show that the bit error rate threshold for the six-state protocol with threshold detectors is 12.611%.This is the end of outlining the proof, and we explain why N ≥ 3 can be neglected and the derivation of the bit error rate threshold in what follows, in which we take the asymptotic limit such that the number of the pulses is infinite and we neglect statistical fluctuations. Our goal is to minimize H(Z|X), and observe that this quantity can be rewritten as the convex combination of the conditional Shannon entropy H(Z|X) = ∞ N =1 P N H(Z|X) (N ) , where H(Z|X) (N ) is the conditional Shannon entropy that is derived from N -photon detection event by Bob.Imagine that we make a twodimensional (2D) plot of H(Z|X) (N ) as a function of the bit error rate e b .The convex combination suggests that we have to consider a convex hull, each of whose extreme points corresponds to e b , H(Z|X) (N ) in the 2D plane.Thanks to the existence of the squash operator ) and H(Z|X) (1,2) .We can neglect any point in the gray-filled regime for the security. for the six-state protocol [7], the plot of H(Z|X) (1,2) ≡ H(Z|X) (N ) for N = 1, 2 is the same as the one of the qubit-based six-state protocol [8], which is expressed as Here, h(x , and H(Z|X) (1,2) is depicted in Fig. 
1 Note that H(Z|X) (N ) for any N ≥ 3 can never be larger than h(e b ) (dotted line) as we use the squash operator for BB84.Also note that the dotted and dashed lines are concave, and an achievable point can be generated by the convex combination of a point along the dashed line and a point below the dotted line such that the average bit error rate coincides with the observed error rate.Suppose that we take convex combination of a point along the dashed line whose bit error rate is lower than the bit error rate of B (12.619..%) and a point in the gray-filled region.Since this convex combination only decreases the mutual information, it follows that we neglect any photon number state whose minimum bit error rate is larger than the bit error rate of C (25.677...%).According to analysis in [16], it turns out that the minimum bit error rate is strictly larger than 25.677...% for N ≥ 4 (note that the minimum bit error rate is not zero for N ≥ 2 since only the singlet state (N = 1) has the symmetry that has the zero bit error rate).Thus, we are left with working only on N = 3 case. For the derivation of H(Z|X) (3) , we first consider symmetrization of the state ρ that is generated by {R α } where R α is π/2 rotation along α-basis (α = X, Y, Z) of a qubit state.Also note that any rotation of the state on H ⊥ , which is an orthogonal complement to H being spanned by {|α b ⊗4 }, does not change the measurement outcomes since the state on H ⊥ always induces double-click (one can also check this with POVM to be mentioned).Thus, we are allowed to work on the symmetrized density matrix ρ A bit tedious calculation with Shur's lemma gives us ρ [16], where r m ≥ 0 (m = 0, 1, 2, 3), and P 0,1,2 is a projector Here, the first (second) index in each ket represents Z component of Alice's (Bob's) 1 2 -spin (3 1 2 -spins with total angular momentum being 3/2) with eigen values being 1/2 and −1/2 (3/2, 1/2, −1/2, and −3/2). To calculate the mutual information, we consider what error rate (e ỹ ) we would have obtained if we had performed the measurement along Ỹ basis onto Alice and Bob's qubit, in which Bob's qubit is defined through the BB84 squash operator.Bob's POVM {M , and what we have to do is to derive e ỹ as a function of e b and to maximize H(Z|X) (3) .In the equation of e b and e ỹ , we erase the parameter r 3 by using the condition Trρ sym reads r 0 , r 1 , r 2 ≥ 0 and 3r 0 + 3r 1 + 2r 2 ≤ 1.By introducing a parameter set {t, s, u} with 0 ≤ t, u ≤ 1 and −1 ≤ s ≤ 1, we can express r 0 = ut(1 + s)/6, r 1 = ut(1 − s)/6, and r 2 = u(1 − t)/2, and we use this parameterization to derive the regime {e b , e ỹ} that ρ that coincides with H(Z|X) (1,2) when e b = e ỹ, and we note that the tangent in Fig. 1 crosses the shadow regime in Fig. 2 so that the bit error rate threshold should degrade.By considering the convex hull of H(Z|X) (N ) for N = 1, 2, 3, the upper bound of H(Z|X), which we express as H(Z|X), is given by (1,2) in case 0.115... > e b (2.82...) e b + 0.0976... in case 1 4 ≥ e b ≥ 0.115...This is also shown in Fig. 2. 
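For completeness, the qubit-based six-state expression referred to above, and the resulting piecewise upper bound, can be written as follows; the first formula is the standard qubit result, which we take to be the intended one since it reproduces the quoted 12.619% threshold, and the numerical coefficients in the linear piece are the truncated values quoted in this section:

H(Z|X)^{(1,2)}(e_b) = e_b + (1 - e_b) h( e_b / (2(1 - e_b)) ),   where  h(x) = -x \log_2 x - (1 - x) \log_2 (1 - x)

\bar{H}(Z|X)(e_b) = H(Z|X)^{(1,2)}(e_b)   for  e_b < 0.115...
\bar{H}(Z|X)(e_b) = 2.82... e_b + 0.0976...   for  0.115... \le e_b \le 1/4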
From this expression, we can derive the bit error rate threshold of 12.6112...% by solving \bar{H}(Z|X) = 1 − h(e_b) with respect to e_b. Remarks: At first sight, our analysis assumes that Alice and Bob's pair states are identically and independently distributed. A way to treat unconditional security is to use the argument based on the quantum de Finetti theorem [17] or Azuma's inequality [18,19]. In the latter argument, we consider an arbitrary whole state of Alice and Bob, not just a pair state, and we consider performing the Bell basis measurement from the first qubit pair in order. ρ_sym is now interpreted as the state of a particular qubit pair conditional on arbitrary Bell basis measurement outcomes. It follows that e_ỹ and e_b are probabilities that are also conditional on the outcomes, which is a property required in applying Azuma's inequality, and most importantly the relations between them are linear, as we have already mentioned (for the N ≥ 4 case, it is given by 0 ≤ e_ỹ ≤ 1 and 0.25677... ≤ e_b). Thus, we can convert our analysis into an unconditional security proof by using exactly the same argument as [19]. To summarize, we prove the unconditional security of the six-state protocol with threshold detectors. For the proof, we propose a technique to determine which photon number states are important, and we employ the squash operator for BB84 and the estimation of the mutual information that can be obtained via a fictitious Y-basis measurement on the resulting qubit state. In this letter we consider a one-way quantum communication protocol, and our analysis may apply to two-way quantum communication protocols such as BBM92-type QKD [20], which we leave for future study. Security proofs of other protocols with threshold detectors are also left as future work. Figure 1 caption (fragment): [H(Z|X)^(1,2) is plotted] as the dashed line, in which h(e_b) (dotted line), 1 − h(e_b) (dot-dashed line), and a tangent (solid line) are also plotted. The bit error rate at the intersection (A) of the dotted line and the dot-dashed line represents the bit error rate threshold of BB84, the intersection (B) of the dot-dashed line and the dashed line represents the bit error rate threshold of the six-state protocol up to N = 2, and C is the intersection of the dotted line and the tangent whose tangent point is B.
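As a numerical check, the two thresholds quoted above can be reproduced by solving 1 − h(e_b) = \bar{H}(Z|X)(e_b) with a simple bisection; the sketch below uses the standard qubit six-state expression for H(Z|X)^(1,2) and the truncated coefficients 2.82 and 0.0976 quoted for the linear piece, so the second result is only approximate.

import numpy as np

def h(x):
    # Binary entropy in bits.
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def H_zx_qubit(e):
    # Conditional entropy H(Z|X) of the qubit-based six-state protocol.
    return e + (1 - e) * h(e / (2 * (1 - e)))

def H_zx_threshold_det(e):
    # Upper bound with threshold detectors: linear piece above e_b ~ 0.115.
    return H_zx_qubit(e) if e < 0.115 else 2.82 * e + 0.0976

def bisect(f, lo, hi, tol=1e-9):
    # f is positive at lo and negative at hi; return the zero crossing.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Key rate per sifted bit: 1 - h(e_b) - H(Z|X); the threshold is where it vanishes.
print(bisect(lambda e: 1 - h(e) - H_zx_qubit(e), 0.05, 0.25))          # ~0.126 (compare 12.619%)
print(bisect(lambda e: 1 - h(e) - H_zx_threshold_det(e), 0.05, 0.25))  # ~0.126 (compare 12.611%)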
4,233.6
2010-08-27T00:00:00.000
[ "Computer Science", "Physics" ]
Negative Space Theory Jianping Xue (Wuxi 214000, Jiangsu, China; <EMAIL_ADDRESS>) Abstract: It is necessary to continue research on the formation mechanism of gravitation, the formation mechanism of black holes, the formation mechanism of mass, and the mechanism of symmetry breaking. This study is dedicated to establishing a new theory, comparing it with existing theory, and testing its self-consistency and rationality. Based on observed physical facts, the negative space theory is put forward according to the principles of scale conservation and spatial discontinuity; that is, space expansion produces positive space and Huazi, and there is a trap field around the Huazi. The comparison results show that the negative space theory can better reveal the above physical mechanisms, change the previous understanding of these mechanisms, and open up a new research direction. Introduction Newton discovered the law of gravitation, but he did not know why there is gravitation between masses or what causes it. Einstein established the general relativistic gravitational field equation, according to which gravitation is the bending of space-time due to the distribution of matter energy-momentum, but the reason why matter causes space-time bending was not explained. For the black hole singularity, no reasonable physical meaning has so far been given. The mass-energy equation established by Einstein indicates that mass and energy can be transformed into each other, but there is no mechanism revealing how energy is converted into mass. Higgs proposed the Higgs field and its mechanism, according to which the mass of matter is the result of particles interacting with the Higgs field, but the source of the Higgs field is not explained; it is only said that the universe is filled with the Higgs field everywhere. The first impetus of symmetry breaking has not been found so far, and it has to be called "spontaneous". In addition, it is generally felt that general relativity and quantum mechanics are difficult to integrate. All these problems have not only been puzzling but have also hindered the further development of physics, and they must be resolved. A large number of observations confirm that space is so large that it can only be measured with light years as the unit of length. Although such a huge space exists, it gives a feeling of emptiness and is difficult to study. Even though Dirac thought of the vacuum as an energy sea, he only regarded it as a container from which virtual particle pairs emerge. From a large number of existing research results, a neglected key factor can be found, namely the product of space expansion. Through the relationship between energy, mass and space, the existence of negative space is recognized, and the negative space theory is then proposed. Negative space theory The astronomer Hubble found that there is a redshift in the light of cosmic galaxies, thus establishing Hubble's law and determining that space is constantly expanding. This reveals two characteristics: space is expandable, and the expansion takes place inside the universe. 
Define the original scaleR0, scale after expansionR1, expansion scale ΔR,then: Before space expansion, the original scale can be written as: R0=R0+0 (1) If space expand, the expansion scale ΔR is the difference between the scale after expansion and the original scale: Then the scale after expansion can be written as: Compare ( 1) and ( 2), the expansion scale ΔR corresponds to the0 scale, that is to say the expansion mode of space must be a kind of expansion which something born from nothing, so expansion increment ΔR is obtained from the space of0 scale. It can be learned that there is 0 volume space in the space, and the spatial increment can be obtained from this 0 volume space by expansion. Propose the following principles: Principle 1, the scale follows the principle of conservation, that is, the total scale remains unchanged when the scale change. Scale has length, area and volume, define the following dimensions: length L, positive length L + , negative length L -;area S, positive area S + , negative area S -; volume V, positive volume V + , negative volume V -. For a length region, its initial length L is 0, that is, there is no length or L = 0.When the region is generated or expands, that is to say , the positive length L + is generated, then: This indicates that when the length is generated, a negative length L -corresponding to the positive length L + will be obtained. For an area region, its initial area S is 0, that is, there is no area or S = 0.When the region is generated or expands, that is to say, the positive area S + is generated, then: This indicates that when the area is generated, a negative area S -corresponding to the positive area S + will be obtained. For a spatial region, its initial volume V is 0, that is, there is no volume or V = 0.When the volume is generated or the space is inflated, that is to say, the positive volume V + is generated,then: This indicates that when the space expands, a positive volume V + and a negative volume V - corresponding to the positive volume will be obtained. The above positive length L + , positive area S + and positive volume V + are real worlds that people can feel, and corresponding to them, there are negative length L -, negative area S - and a negative volume V -. Positive volume constitutes positive space, negative volume constitutes negative space. Principle 2, the space is discontinuous and quantized. Here the proof: Let O as a point on one dimension line, A point and B point is adjacent to O point from right and left, and the distance between them is: That is to say , A and B is continuous to one another. According to the continuous meaning, no matter in any case, one point can never be separated. But according to the results observed by Hubble, that space can expand internal, that is to say something can be born from nothing, so expansion can occur within O point , then: That mean A point, B point is not the same point, A and B can't be continuous ,else the ΔL cannot be obtained, then: That mean the space will be non-continuous space.From the form of spatial expansion, it can be seen that the positive space generated from the expansion of the 0 space points, and will make the space discontinuous.The non-continuous space will inevitably lead to the quantization of space, otherwise, non-quantized space must be a continuous space. 
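Written compactly, the conservation relations of Principle 1, together with the expansion relations above, read as follows (a restatement in one place; nothing beyond what is stated in this section is assumed):

\Delta R = R_1 - R_0,   R_1 = R_0 + \Delta R

L = L^{+} + L^{-} = 0  \Rightarrow  L^{-} = -L^{+}
S = S^{+} + S^{-} = 0  \Rightarrow  S^{-} = -S^{+}
V = V^{+} + V^{-} = 0  \Rightarrow  V^{-} = -V^{+}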
According to Principle 1 and Principle 2, when cosmic space expands (or explodes), positive volume and negative volume are produced together from 0 volume and then separated. The positive volume forms positive space, which constitutes the space of the universe; the negative volume forms negative space, which curls up into a negative-space particle of no volume within positive space. This particle is here called the Huazi, and it is the basic unit of matter. Around the Huazi there is a trap field between the Huazi and the positive space, arising from the change from positive to negative property, and this trap field plays a role in the relevant physical mechanisms. This is the negative space theory.

The reason for Hubble's redshift can be explained with this theory. A photon's energy is determined by its wavelength (or frequency); when redshift occurs, the photon's wavelength becomes longer. This is taken as evidence of the expansion of space, the usual view being that the expansion of space stretches the photon's wave. But a longer wavelength means the photon loses energy, and where does that energy go? No satisfactory answer has been given so far. According to the negative space theory, Huazi are produced by spatial expansion, and each Huazi has a trap field that can absorb energy. When a photon encounters such a trap field it loses energy, its wavelength increases, and redshift occurs. On this view the cosmological redshift is not caused by the expansion of space stretching the photon's wavelength, but by the photon losing energy to the trap fields of Huazi. Since space is constantly expanding, the more distant a photon's source, the more trap fields the photon encounters on its way to the Earth, causing greater energy loss and a greater redshift. This is consistent with the description given by Hubble's law. The lost energy is converted into energy or dark energy and, together with the Huazi, forms matter or dark matter.

The Big Bang theory needs to resolve how the outer and inner parts of the universe expand. Existing proposals include the membrane theory and parallel or multiple cosmologies, but the membrane theory must assume that free room exists outside the membrane for expansion to proceed, and parallel or multiple cosmologies cannot explain the source of expansion within the universe, so these theories are unsatisfactory. According to the negative space theory, spatial expansion can be generated from 0-volume regions of space; that is, both inside the universe and at the edge of space, expansion can arise from 0-volume space, with no need for pre-existing free space. The negative space theory thus resolves the problem of the origin of spatial expansion.

It can be seen that the negative space theory explains the phenomena of cosmological redshift and spatial expansion reasonably and is self-consistent.

3 Analysis of relevant physical events with the negative space theory
The transformation of mass and energy
Define energy E, mass M, and the speed of light C. Einstein's energy equation is

E = M C^2.

From this equation it can be seen that mass and energy are equivalent, which implies that mass and energy can be transformed into each other. The product of the annihilation of matter with antimatter is energy; that is, the matter disappears and becomes entirely energy.
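As a numerical aside (not part of the original text), the mass-energy relation above can be checked against electron-positron annihilation using standard constants; the values below are textbook figures, not data from this paper.

```python
# Illustrative only: E = M c^2 applied to the electron rest mass, and the
# wavelength of the photons produced when an electron-positron pair at rest annihilates.
m_e = 9.1093837015e-31   # electron (and positron) rest mass, kg
c   = 2.99792458e8       # speed of light, m/s
h   = 6.62607015e-34     # Planck constant, J*s
eV  = 1.602176634e-19    # joules per electronvolt

E_rest = m_e * c**2                       # rest energy of one electron
print(f"electron rest energy: {E_rest/eV/1e3:.1f} keV")   # ~511 keV

# In e+ e- annihilation at rest, the two rest masses become two photons of ~511 keV each:
wavelength = h * c / E_rest
print(f"wavelength of each annihilation photon: {wavelength*1e12:.2f} pm")  # ~2.43 pm
```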
The annihilation of an electron and a positron generates a pair of photons, so it is reasonable to regard the photon as a pure quantum of energy. The following analyzes the conditions under which energy is converted into mass.

According to the Big Bang theory, at 10^-43 s after the Big Bang the temperature of the universe was about 10^32 degrees and the universe emerged from quantum fluctuations in the background; at 10^-35 s the temperature was about 10^27 degrees, gravitation separated, and quarks, bosons, and leptons formed; at 10^-10 s the temperature was about 10^15 degrees and protons and neutrons formed; at 0.01 s the temperature was about 100 billion degrees, photons, electrons, and neutrinos dominated, protons and neutrons accounted for only about one part in a billion, and the system expanded rapidly while the temperature and density continued to fall [1][2]. The Big Bang thus caused a dramatic expansion of space and a sharp drop in the temperature of the universe, and in this account the energy "cooled" into matter; this is how the Big Bang theory describes the conversion of energy into material particles. But this conclusion is not correct.

First, the cosmic microwave background shows that the background temperature of the universe is now about 3 K, a drop of many orders of magnitude from the beginning of the universe. Human beings probe galaxies billions or even ten billion light years away through physical signals in the form of energy quanta such as electromagnetic waves, and these signals have existed along a cold path of billions of light years. If, as the Big Bang account implies, energy quanta were frozen into material particles at far higher temperatures shortly after the Big Bang, then no energy quanta should be observable against today's much colder background; yet in fact these quanta are found to travel the whole distance to the Earth without "cooling" and freezing into matter.

Second, the Sun continuously radiates energy into its surroundings, and this solar energy is not cooled into substance in the 3 K background of the universe, even though the same energy was bound up as substance in the Sun before being emitted.

Third, electron-positron annihilation produces a pair of observable pure energy quanta (photons), which likewise indicates that energy quanta are not frozen into material particles in today's 3 K background.

In addition, the universe contains a great many energy fields, including electric, magnetic, and gravitational fields, and the existence of these fields also shows that low temperature does not convert energy quanta into particles.

The above facts therefore show that a drop in temperature does not convert energy into mass. An energy quantum obviously will not convert itself into mass spontaneously; it must rely on external conditions. After excluding the temperature factor, the remaining external candidates are space itself and the Higgs field; whether the Higgs field can convert energy into mass is discussed later. According to the negative space theory, spatial expansion produces positive space and the Huazi.
From the relativistic mass formula and the energy equation it is known that energy quanta normally travel at the speed of light, whereas particles with mass cannot. Therefore, to convert an energy quantum into a mass particle, the state of the energy quantum must be changed, for example from a photon state to an electron state, and correspondingly its speed must be reduced. At present the universe is cold compared with its beginning, yet light and electromagnetic waves still propagate through the "vacuum" at the speed of light; from this objective fact we can conclude that space itself does not have the ability to transform energy into mass. After excluding the temperature factor and the positive-space factor, the remaining candidates for converting energy quanta into mass are the Huazi and the Higgs field.

First consider the role of the Huazi. Around the Huazi there is a trap field whose physical effect is opposite to that of positive space. When an energy quantum reaches a Huazi, its state is changed by the action of the trap field and its velocity is reduced, as if by a "brake". The energy quantum then converts into a basic unit of matter, here called the Gouzi, and acquires mass (see Figure 1). In this transformation a "force" effect appears: the force is generated by the braking action of the Huazi on the energy quantum, which allows the energy to be converted into mass. The force is thus a concomitant product of the conversion of energy quanta into material particles.

In 1923 the American physicist Compton, studying the scattering of X-rays by electrons in graphite, found that some of the scattered waves have wavelengths slightly larger than that of the incident wave; this phenomenon is called the Compton effect. The experimental results are: (1) in the scattered light, in addition to the spectral line at the original wavelength λ0, a line at a new wavelength λ > λ0 appears; (2) the change in wavelength Δλ = λ − λ0 increases as the scattering angle φ (the angle between the scattering direction and the incident direction) increases; (3) for scattering materials of different elements, the wavelength change Δλ is the same at the same scattering angle, while the intensity of the scattered light at wavelength λ decreases as the atomic number of the scattering material increases.

According to the laws of conservation of energy and momentum,

hν0 + m0 C^2 = hν + m C^2,    (hν0 / C) n0 = (hν / C) n + m V,    (15)

where h is the Planck constant, ν0 and ν are the photon frequencies before and after the collision, m0 and m are the electron masses before and after the collision (m0 the rest mass), V is the electron velocity after the collision, n0 and n are unit vectors along the incident and scattered directions, and C is the speed of light. From these relations the wavelength change follows as

Δλ = λ − λ0 = (h / m0 C)(1 − cos φ).    (16)

It can be seen from equations (15) [3] of the Compton effect that the energy of a photon can be transmitted (divided off) in the amount h(ν0 − ν): one part of this divided energy goes into the momentum mV of the electron, and the other part goes into the mass increase (m − m0) of the electron. (A numerical illustration of equation (16) is given below.)

The Compton experiments show that a photon can be divided into smaller photons, that is, photons are separable, and that the divided portion can be converted into part of the static mass of the electron. If the Higgs field existed everywhere, as the Higgs mechanism asserts, the photon should become a mass particle under the action of the Higgs field alone, rather than becoming associated with mass only after encountering an electron. The Higgs mechanism is therefore not true.
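The following short calculation (added here for illustration; it is not from the original paper) evaluates the Compton shift of equation (16) and the energy h(ν0 − ν) handed to the electron for a few scattering angles, using standard constants and an assumed incident wavelength of about 0.71 Å.

```python
# Numerical illustration of the Compton shift, delta-lambda = (h / m0 c)(1 - cos phi).
import math

h   = 6.62607015e-34      # Planck constant, J*s
m0  = 9.1093837015e-31    # electron rest mass, kg
c   = 2.99792458e8        # speed of light, m/s
lambda_C = h / (m0 * c)   # Compton wavelength of the electron, ~2.43e-12 m

lambda0 = 7.1e-11         # assumed incident X-ray wavelength (~0.71 angstrom)
for phi_deg in (0, 45, 90, 135, 180):
    phi = math.radians(phi_deg)
    dlam = lambda_C * (1 - math.cos(phi))           # wavelength shift, grows with phi
    dE = h * c * (1/lambda0 - 1/(lambda0 + dlam))   # energy h(nu0 - nu) transferred to the electron
    print(f"phi = {phi_deg:3d} deg   delta-lambda = {dlam:.3e} m   energy to electron = {dE:.3e} J")
```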
The Compton effect also shows, through equation (16), that the wavelength change varies continuously with the scattering angle φ; that is, the energy of a photon can be divided into ever smaller parts, down to an indivisible smallest micro-photon. Micro-photons can likewise be converted into micro-masses. In other words, a photon can be subdivided into micro-photons, and micro-photons correspond to micro-particles, which indicates that the basic particles of the existing standard particle model are not truly fundamental: the elementary particles can still be composed of more basic constituents. The existing elementary particle model must therefore be modified.

According to the negative space theory, because a Huazi is present in the electron, the trap field of the Huazi has the functions of absorption and transformation. When a single photon encounters this trap field, a different amount of its energy can be absorbed (divided off) depending on the degree of interaction (the incidence angle φ), and the absorbed portion becomes part of the mass of the electron, with forces generated during the conversion that give the electron its momentum. This is consistent with the Compton effect and its interpretation. The negative space theory can thus reasonably reveal the mechanism by which energy is converted into mass.

Source of mass
The British physicist P. W. Higgs proposed the Higgs mechanism. In this mechanism the standard-model gauge bosons and fermions acquire mass by interacting with the Higgs field; the Higgs boson is the quantized excitation of that field and obtains its own mass through self-interaction, and the whole universe is said to be filled with the Higgs field.

Consider the process by which an electron acquires mass on this account. According to the description of the Higgs mechanism, the electron first forms an electron "shell" whose mass is Me0 = 0, and the shell then interacts with the Higgs field to acquire the electron body's mass Me1. The electron mass Me is then

Me = Me0 + Me1,    (17)

and the electron energy Ee is

Ee = Me C^2.    (18)

It can be seen from equation (17) that the formation of the electron shell requires a source. According to quantum field theory the shell Me0 arises from the excitation of a field; a field is energy, so the shell carries energy. At the same time, since the body mass Me1 is given by the Higgs field (the Higgs particle), the energy equation implies that the electron body has no energy before the mass Me1 is acquired. Yet observation shows that the whole electron exists in a mass state and not in a pure energy state. Therefore the shell Me0 must be converted into a non-zero mass state, Me0 ≠ 0, while the mass of the shell is not given by the Higgs field. This clearly shows that the Higgs mechanism is not true.
It is generally accepted that when a γ photon with energy greater than 1.02 MeV passes close to a nucleus, it can transform into an electron and a positron; this is the electron-positron pair effect of the γ photon and a typical case of energy converting into matter. The phenomenon shows, first, that a zero-mass γ photon can be transformed into mass particles, and second, that pair production occurs only when the γ photon passes close to a nucleus: the zero-mass γ photon does not undergo pair production along its path through space before it encounters a nucleus. According to the Higgs mechanism the universe is filled with the Higgs field, which would make it possible for the γ photon to transform into an electron-positron pair before it meets any nucleus. The facts show that this is not the case; moreover, no experiment has confirmed that a photon can be transformed into an electron-positron pair in a pure electromagnetic field [4][5].

According to the negative space theory, it is the Huazi that converts energy into mass particles and endows them with mass. Since Huazi are present in the nucleus, a high-energy γ photon that encounters the nucleus, or any other particle containing a Huazi, is acted on by the trap field and transforms into an electron-positron pair. It is predicted that in an ordinary pure electromagnetic field, even if the energy of a high-energy γ photon reaches the combined mass of an electron and positron, no pair will be produced, because no Huazi (negative space) is present, unless the field is strong enough to stimulate spatial expansion and thereby produce negative space.

The above arguments show that the Higgs mechanism is unreasonable and its role should be excluded; the negative space theory explains these facts more reasonably. The negative space theory further predicts that in a region where an electron and positron have annihilated, the Huazi remains after the photons leave, and if another energy quantum passes through that region, new particles will appear.

Gravitational field equation
Einstein established the gravitational field equation from the relationship between matter and space [6], namely

Rμν − (1/2) R gμν = (8πG / C^4) Tμν,    (19)

where Rμν is the second-order curvature tensor, R is the curvature scalar, gμν is the metric tensor, Tμν is the energy-momentum tensor of matter, G is the gravitational constant, and C is the speed of light. The left side of equation (19) describes the space-time geometry of the gravitational field, and the right side is the energy-momentum tensor of matter acting as the source of the field.

Solving the field equation in Schwarzschild coordinates (Ct, r, θ, φ) gives the Schwarzschild exterior solution

ds^2 = −(1 − rs/r) C^2 dt^2 + (1 − rs/r)^−1 dr^2 + r^2 (dθ^2 + sin^2 θ dφ^2),  with  rs = 2GM / C^2,    (20)

where M is the mass. For a photon the space-time interval vanishes, ds = 0; for "radial" motion (dθ = dφ = 0) this gives

(1 − rs/r) C^2 dt^2 = (1 − rs/r)^−1 dr^2,    (21)

so the coordinate velocity V00 of a photon moving radially is

V00 = |dr/dt| = C (1 − rs/r).    (22)

In (22) the behaviour at r = rs and at r = 0 is singular: at r = rs the metric coefficient diverges, and at r = 0 the expression itself is undefined. The singularity at r = rs can be eliminated by choosing a different coordinate system, while the one at r = 0 is inherent in the space-time and is intrinsic [7][8][9][10].
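As an illustrative aside (not part of the original paper), the coordinate velocity in equation (22) can be tabulated for a solar-mass object; the mass value and radii below are chosen only for illustration.

```python
# Sketch of the radial coordinate velocity of light, V00 = c (1 - rs/r),
# for a one-solar-mass object.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M = 1.989e30         # one solar mass, kg
rs = 2 * G * M / c**2   # Schwarzschild radius, ~2.95 km

for r in (10 * rs, 2 * rs, 1.1 * rs, 1.001 * rs):
    v00 = c * (1 - rs / r)
    print(f"r = {r/rs:7.3f} rs   V00 = {v00:.4e} m/s")
# V00 approaches c far from the mass and tends to 0 as r -> rs;
# at r = 0 the expression is undefined (division by zero), the case discussed above.
```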
Before negative space is recognized it is difficult to give r = 0 a physical meaning, so it can only be called intrinsic. From the negative space theory it can be understood that r = 0 means that, when the black hole forms, everything at radius 0 is surrounded by the trap field, and the exterior (the positive space) is characterized by the trap field; that is, the region has a strong negative-space character and acts as the spatial singularity. In other words, even if the positive-space radius is 0, energy can still exist there, residing in the trap field of the negative space.

According to Einstein's gravitational field equation, gravitation is the result of spatial distortion, which requires space to be continuous; a discontinuous space cannot be distorted, just as one cannot twist a pile of sand. However, material particles occupy space, which means space cannot be continuous, otherwise particles could not take up space. Alternatively, if space were continuous, then when a material particle moves, the space it vacates would recover at different speeds in the three directions (one along the direction of motion, two perpendicular to it), and void regions containing no space would appear, destroying the continuity of space. This again indicates that space must be discontinuous. It is therefore not correct to treat space as continuous, it is not correct for Einstein's equation of general relativity to describe gravity as a spatial distortion effect, and the conclusion that mass causes space to bend does not hold.

According to the negative space theory, a Huazi is present in every mass particle and carries a trap field. The effect of this field on the space around the mass particle is equivalent to the spatial distortion described in the gravitational field equation. Thus Einstein's gravitational field equation is not a relation in which matter determines the character of space, but a relation describing the spatial character between the Huazi and positive space; that is, it is the trap field that causes the space around massive matter to appear distorted. The negative space theory reveals the essential source of the gravitational field and is more rational in that it connects the field to the internal composition of mass particles.

Black hole
According to black hole theory, black holes are produced by gravitational collapse. This argument does not hold. Consider the internal gravitational field of an ideal homogeneous solid sphere. By the law of universal gravitation, the gravitational field strength (gravitational acceleration) per unit mass at an interior radius r is

g(r) = K r,  0 ≤ r ≤ R,  with g(R) = GM / R^2 = K R at the surface,

where K is a coefficient and R is the radius of the sphere. At the center of the sphere the gravitational force is zero, and the field at each interior point increases linearly with the distance from the center. However, within a sphere of a given scale this gravitation is not greater than the strength of the atom, so it does not tear atoms apart and cause collapse. That is, gravitation alone cannot "tightly compress" matter into a black hole, and it cannot generate a central singularity.
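The interior field just described can be illustrated numerically; the sketch below (not from the original paper) uses Earth-like mass and radius purely as stand-in values, with g = (GM/R^3) r inside the sphere and g = GM/r^2 outside.

```python
# Gravitational field strength inside and outside a homogeneous solid sphere.
G = 6.674e-11
M = 5.97e24          # mass, kg (Earth used only as an example)
R = 6.371e6          # radius, m

def g(r):
    if r <= R:
        return G * M * r / R**3     # interior: g = (GM/R^3) r, linear in r, zero at the centre
    return G * M / r**2             # exterior: usual inverse-square law

for frac in (0.0, 0.25, 0.5, 0.75, 1.0, 2.0):
    r = frac * R
    print(f"r = {frac:4.2f} R   g = {g(r):6.3f} m/s^2")
# The interior field peaks at the surface value (~9.8 m/s^2 here) and never exceeds it.
```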
This shows that one can specify a black hole whose gravitational field strength at the Schwarzschild radius equals the gravitational field strength (gravitational acceleration) at the Earth's surface; but a field of that strength does not form a black hole on the Earth, and light can escape from it, so in this sense such a black hole does not exist. Using the equations of general relativity to explain the cause of black holes therefore leads to contradictions.

According to the negative space theory, black holes arise in the following way: in a large body of matter, under mutual interaction, the Huazi (negative space) in the material undergo a chain reaction and recombine into a large trap field, and continued chain superposition effects form the black hole. The black hole's horizon is the trap field, which has a pronounced trapping effect on photons, including holding photons within the trap field. The negative space theory is thus reasonable here as well, and it can be speculated that there are black holes scattered at random in the universe, even "starved" black holes composed of combinations of Huazi carrying no mass.

Symmetry breaking
Symmetry breaking is a major basic topic in quantum theory. However, the first impetus for the breaking has still not been found, and the word "spontaneous" has been added for this reason. According to the negative space theory the impetus comes from the Huazi and its trap field, which is reasonable; the detailed argument, in terms of the Sakharov conditions, is given in the passage following the conclusion.

Conclusion
According to the above analysis, it can be considered that negative space exists objectively. It is the inevitable requirement and result of the expansion of cosmic space, and it plays an important role in the transformation of energy into matter, the formation of mass, the generation of force, the formation of black holes, and symmetry breaking. According to the negative space theory, particle mass is not obtained through the Higgs mechanism but is transformed from energy by the action of the Huazi; there is no Higgs field and no Higgs particle. Black holes and the gravitational field are the result of the Huazi and its trap field, not of mass. The driving force of symmetry breaking comes from the Huazi and its trap field. The basic particles of matter are formed from combinations of the Gouzi.

Based on the negative space theory, the following predictions can be made: (1) the collision of a pair of high-energy photons does not produce an electron-positron pair; (2) the Higgs mechanism does not exist and the Higgs particle does not exist; the so-called Higgs particle is just a new particle or a new composite particle; (3) the black holes in the universe are composed of Huazi, so there are countless black holes of all masses in the universe, including black holes whose mass does not correspond to their horizon, that is, "starved" black holes; (4) gravitational waves are not distortions of space. These predictions will test the correctness of the negative space theory.

Figure 1. Energy quantum and negative space (Huazi). Figure 2. Negative space (Huazi) acting on the energy quantum.
The Schwarzschild radius derived from the gravitational field equation of general relativity has been used as the theoretical basis for studying black holes. The Schwarzschild radius is rs = 2GM / C^2, and the gravitational field strength evaluated at the Schwarzschild radius is

g_s = GM / rs^2 = C^4 / (4GM),    (27)

where G is the gravitational constant, M is the mass of the celestial body (the black hole), and C is the speed of light. Equation (27) shows that as the mass of the black hole increases, the gravitational field strength at the Schwarzschild radius decreases, which is inconsistent with the expectation that the greater the mass of the black hole, the stronger its gravitational field. From formula (27), the black hole mass M_black whose field strength at the Schwarzschild radius equals the gravitational field strength at the surface of the Earth (or the Sun) can be obtained, namely M_black = C^4 / (4 G g), where g is the surface gravitational acceleration of the Earth (or the Sun). (A numerical illustration of relation (27) is given at the end of this passage.)

The Soviet physicist A. D. Sakharov pointed out that the dynamics of baryogenesis must satisfy three conditions. (1) There must be an interaction that violates the conservation of baryon number: if the universe begins in a state in which the baryon number is zero and no interaction violates baryon-number conservation, the baryon number will remain zero forever. (2) There must be interactions that violate charge-conjugation (C) invariance and the combined charge-conjugation and parity (CP) invariance: only then can the numbers of particles and antiparticles become unequal, producing an asymmetry between the numbers of baryons and antibaryons. (3) There must be a departure from thermal equilibrium: if the universe remains in thermal equilibrium, the CPT theorem implies that the average baryon number stays zero.

According to the earlier discussion, there is a trap field around the Huazi, a field whose spatial character changes gradually. It has the functions of compression, absorption, and transformation, and it acts on physical quantities in a unidirectional way. The C and P properties of material particles are determined in the conversion from energy via the Huazi: in converting energy into a material particle, the Huazi not only imparts mass to the particle but also modifies the state of the energy, including its spin, its wave state, and its spatial form. After this modification, particles exhibit, in addition to mass and spin, charge characteristics such as positive charge, negative charge, or no charge. The spatial character of the Huazi and its trap field gives them a natural ability to violate baryon-number conservation and C and CP invariance, and because the conversion of energy into mass particles lowers the energy density of space, the thermal equilibrium is also disturbed. The Huazi and its trap field therefore satisfy the three conditions proposed by A. D. Sakharov. The driving force of the "spontaneous" CP breaking is the existence of the Huazi. The negative space theory can thus reasonably explain the first driving force of CP breaking and the imbalance between the amounts of matter and antimatter.
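Referring back to relation (27) at the start of this passage, the following sketch (added for illustration, not part of the original paper) tabulates the field strength at the Schwarzschild radius for a few masses and solves for the mass whose horizon field strength equals the Earth's surface value of 9.8 m/s^2.

```python
# Newtonian-style field strength evaluated at the Schwarzschild radius:
# g_s = G M / rs^2 = c^4 / (4 G M), which falls as the mass grows.
G = 6.674e-11
c = 2.998e8
M_sun = 1.989e30

def g_at_rs(M):
    rs = 2 * G * M / c**2
    return G * M / rs**2            # equals c**4 / (4 * G * M)

for M in (M_sun, 1e6 * M_sun, 1e9 * M_sun):
    print(f"M = {M/M_sun:10.1e} M_sun   g at rs = {g_at_rs(M):.3e} m/s^2")

# Mass whose horizon field strength equals Earth's surface gravity (~9.8 m/s^2):
M_earthlike = c**4 / (4 * G * 9.8)
print(f"mass with g(rs) = 9.8 m/s^2: {M_earthlike:.3e} kg  (~{M_earthlike/M_sun:.2e} solar masses)")
```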
7,618.8
2019-05-05T00:00:00.000
[ "Physics" ]
Targeted regions sequencing identified four novel PNPLA1 mutations in two Chinese families with autosomal recessive congenital ichthyosis

Abstract
Background: Autosomal recessive congenital ichthyosis (ARCI) is a rare, genetically heterogeneous cutaneous disease predominantly characterized by erythroderma, generalized abnormal scaling of the whole body, and a collodion membrane at birth. Numerous causative genes have been shown to be responsible for ARCI, including PNPLA1, which causes ARCI type 10. The objectives of this study are to describe the clinical features of three ARCI patients from two unrelated Chinese families and to identify the underlying causative mutations.
Methods: Genomic DNA was extracted from peripheral venous blood obtained from the two Chinese ARCI families in Shandong province. Targeted regions sequencing (TRS) followed by Sanger sequencing was then conducted to identify and validate the likely pathogenic mutations in the ARCI families.
Results: Genetic analyses revealed four novel PNPLA1 variants that are predicted to be probably pathogenic for ARCI in three patients of the two families. Patient 1 in one family was compound heterozygous for c.604delC/p.Arg202Glyfs*27 and c.820dupC/p.Arg274Profs*15, whereas c.738_742delinsCCCACAGATCCTGC/p.Gly247_Tyr248delinsProGlnIleLeuHis and c.816dupC/p.Arg274Profs*15 were found in patients 2 and 3 of the other family. In addition, these variants cosegregate in the two pedigrees and are all within highly conserved regions of the PNPLA1 protein, which indicates that the four mutations are likely pathogenic.
Conclusion: Our findings not only broaden the mutational spectrum of PNPLA1, but also contribute to establishing genotype-phenotype correlations for different forms of ARCI.

K E Y W O R D S autosomal recessive congenital ichthyosis (ARCI), genetic analyses, PNPLA1, Sanger sequencing, targeted regions sequencing (TRS)

| INTRODUCTION
As a clinically and genetically heterogeneous group of skin disorders, the ichthyoses are characterized by extensively dry and scaly skin covering almost the whole body, sometimes with erythroderma and a collodion membrane (Takeichi & Akiyama, 2016). They have been divided into two categories, syndromic and nonsyndromic ichthyoses; the latter, whose symptoms appear only in the skin, can be classified into common ichthyoses, autosomal recessive congenital ichthyosis (ARCI), keratinopathic ichthyosis (KPI), and other forms of ichthyoses (Oji et al., 2010; Takeichi & Akiyama, 2016).
With an approximate prevalence of 1:200,000, ARCI is clinically classified into three primary subtypes: congenital ichthyosiform erythroderma (CIE, OMIM 242100), lamellar ichthyosis (LI, OMIM 242300), and the less common harlequin ichthyosis (HI, OMIM 242500), which is more severe than the other two subgroups (Esperon-Moldes et al., 2019; Karim, Murtaza, & Naeem, 2017). ARCI is transmitted in an autosomal recessive pattern, with common clinical signs of generalized scales sometimes accompanied by erythroderma or a collodion membrane, although the phenotypes of affected patients may vary greatly (Simpson et al., 2019).

The corresponding coding protein PNPLA1 is 532 amino acids in length (NM_001145717.1) and is one of nine members of the PNPLA protein family that share a common patatin-like domain (Wilson, Gardner, Lambie, Commans, & Crowther, 2006). PNPLA1 consists of an entire patatin domain (residues 16-185) at the N-terminus and a proline-rich C-terminal domain (residues 326-451), in which a hydrophobic region extending from Leu335 to Ser417 is located (PA et al., 2013). In addition, PNPLA1, mainly expressed in the keratinocytes of the epidermal granular layer (PA et al., 2013), plays a significant role in the glycerophospholipid metabolism of the cutaneous barrier (Esperon-Moldes et al., 2019). A majority of PNPLA1 mutations involve the highly conserved N-terminal patatin domain, known as the mutational "hot-spot" region, and most ARCI cases carry nonsense or missense variants (Diociaiuti et al., 2018; Karim, Ullah, et al., 2019).

With the advantages of high throughput, cost-effectiveness, speed, and accuracy, targeted regions sequencing (TRS) is widely used in the auxiliary diagnosis and classification of genetic diseases that have several disease-causing genes. In addition, because TRS sequences only the pathogenic genes of specific diseases, it greatly reduces costs. The main objectives of this study are to describe the detailed clinical features of three Chinese ARCI patients from two unrelated families and to identify the underlying likely pathogenic mutations responsible for the ichthyosis phenotypes, which could facilitate genetic counseling for the ARCI families and thereby further improve the quality of the population.

| Ethical compliance
This study was approved by the Ethics Committee of the Affiliated Hospital of Qingdao University.

| DNA extraction
Genomic DNA was extracted from peripheral venous blood using a Qiagen DNA extraction kit, strictly following the manufacturer's protocol. The concentration of the genomic DNA was measured with a spectrophotometer (Thermo Fisher Scientific Oy, Ratastie 2, FI-01620 Vantaa, Finland), and the DNA was quantified with a NanoDrop 2000 (Thermo Fisher Scientific). DNA fragments of 100-700 bp were obtained by random fragmentation of the qualified genomic DNA using a Covaris shearing instrument, and fragments with sizes ranging from 350 to 450 bp, including the adapter sequences, were then selected for DNA library preparation. Biotinylated capture probes (80-120-mers) were hybridized with the DNA libraries under defined conditions. Streptavidin-modified magnetic beads bound covalently to the biotin-labeled probes to capture the target genes. Subsequently, the magnetic beads carrying the target genes were collected on a magnetic rack, washed, and purified to enrich the target genes.
Finally, the enriched libraries were sequenced on an Illumina NextSeq 500 sequencer with 150 bp paired-end reads.

| TRS for mutation detection
Following sequencing, low-quality variants were filtered out by requiring a quality score ≥20, and BWA was used to align the clean reads to the reference human genome (hg19). The identified SNPs and InDels were annotated using the Exome-assistant program (http://122.228.158.106/exomeassistant). SNPs and InDels with a frequency >0.02 in HapMap samples, 1000 Genomes, ESP6500, ExAC_ALL, and ExAC_EAS were removed. The remaining variants were evaluated with several bioinformatics programs to predict their pathogenicity.

| Sanger sequencing validation
The PNPLA1 variants of the patients identified by NextSeq 500 sequencing were validated by Sanger sequencing. Genomic DNA from all available family members was obtained for Sanger sequencing. Amplified polymerase chain reaction (PCR) products were analyzed by gel electrophoresis, then purified and sequenced on an ABI PRISM 3730 genetic analyzer (Applied Biosystems; Thermo Fisher Scientific, Inc.) using the terminator cycle sequencing method. The loci of the mutations were identified by comparing the DNA sequences with the reference sequences on the National Center for Biotechnology Information (NCBI) website (https://www.ncbi.nlm.nih.gov/).

| Clinical manifestations
P1 (Figure 2a-c) was born as a severe collodion baby with erythroderma. Whitish, plate-like, dry scales accompanied by pruritus can be seen over the whole body except the face, and they usually worsen in spring and autumn. In addition, he also presents with increased cerumen, palmoplantar hyperlinearity, and abnormal desquamation on the soles. The phenotypic features of P2 (Figure 2d,e) and P3 (Figure 2f) are similar to each other and milder than those of P1. Both began to develop skin lesions three months after birth, without a collodion membrane covering their bodies or apparent erythroderma. They manifest typical ichthyosis phenotypes of generalized dry, fine, whitish scales, which are most severe at the turn of the seasons, but never with pruritus. Mild palmoplantar hyperlinearity can also be observed. No other members of their families have identical or similar signs and symptoms.

None of the mutations are annotated in the HGMD, ESP6500siv2_ALL, 1000g2015aug_ALL, ClinVar, ExAC, or dbSNP147 databases, and all are absent from 100 healthy controls (Figures 3b,d and 4b,d). Cosegregation analyses indicate that the inheritance mode of the disease in these families is consistent with the autosomal recessive pattern of ARCI type 10 and that the four novel variants are responsible for the ARCI presentations of the three patients, suggesting that the PNPLA1 mutations are likely pathogenic.

PNPLA1 mutations
Protein sequences of various species, including Homo sapiens, Bos taurus, Canis lupus familiaris, Equus caballus, Felis catus, Mus musculus, Oryctolagus cuniculus, and Rattus norvegicus, were obtained from the NCBI website. Multiple sequence alignment of the PNPLA1 protein among these species was carried out using DNAMAN software, and the results suggested that R202, G247, Y248, and R274 are all localized within the highly conserved domain of PNPLA1, but outside the core patatin domain (residues 16-185) (Figure 5).
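As an illustrative aside (not part of the original study), the conservation check described above can be sketched in a few lines of code; the aligned fragments below are invented placeholders, not the actual DNAMAN alignment of PNPLA1, and the function name is ours.

```python
# Toy conservation check over a multiple sequence alignment (made-up fragments).
aligned = {
    "Homo_sapiens":      "SLRDYGARVF",
    "Mus_musculus":      "SLRDYGARVF",
    "Bos_taurus":        "SLRDYGARVF",
    "Rattus_norvegicus": "SLRDYGSRVF",   # differs at column 6 to show a non-conserved site
}

def is_conserved(alignment, column):
    """Return True if every sequence carries the same residue at the given 0-based column."""
    residues = {seq[column] for seq in alignment.values()}
    return len(residues) == 1

alignment_length = len(next(iter(aligned.values())))
for col in range(alignment_length):
    print(col, is_conserved(aligned, col))   # every column True except column 6
```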
| DISCUSSION
In this study, we identified four previously unreported PNPLA1 mutations (three frameshift mutations and one in-frame mutation) through TRS combined with Sanger sequencing in two unrelated ARCI type 10 families of Chinese origin. The frameshift mutations c.604delC, c.820dupC, and c.816dupC, classified as likely pathogenic according to the American College of Medical Genetics and Genomics (ACMG) guidelines under the criteria PVS1 + PM2 + PM3, can result in PNPLA1 protein truncation or nonsense-mediated RNA decay with loss of protein expression from the affected allele, which may damage the normal structure and function of the protein (a toy illustration of how a single-base duplication shifts the reading frame is sketched at the end of this passage). In addition, the three mutations remove a large part of the PNPLA1 C-terminal region, which is essential for protein activity (Karim, Ullah, et al., 2019). The in-frame variant c.738_742delinsCCCACAGATCCTGC is identified as a probably pathogenic mutation based on protein prediction tools and the ACMG guidelines under the classification criteria PM2 + PM3 + PM4. Moreover, the four novel variants cosegregate in these families, supported by the finding that they were not found in 100 healthy Chinese individuals.

As a subgroup of the nonsyndromic ichthyoses, ARCI is a general term for a group of rare genetic cornification disorders ranging from relatively mild to very severe, sometimes even life-threatening (Bastaki et al., 2017). ARCI can be divided into six forms on the basis of diverse clinical manifestations: congenital ichthyosiform erythroderma (CIE), lamellar ichthyosis (LI), harlequin ichthyosis (HI), self-healing collodion baby (SHCB), acral self-healing collodion baby (acral SHCB), and bathing suit ichthyosis (BSI) (Simpson et al., 2019). Frequent clinical phenotypes comprise a collodion membrane at birth; generalized fine or plate-like dry scales that are whitish, dark gray, or brown in color; erythroderma; and palmoplantar hyperlinearity. ARCI patients may also exhibit palmoplantar keratoderma (PPK), swollen hands and feet, anhidrosis, alopecia, nail abnormalities, and ectropion (Boyden et al., 2017). ARCI is inherited as an autosomal recessive trait, so patients are homozygous or compound heterozygous for pathogenic bi-allelic mutations. In addition, ARCI has been classified into fifteen genetic forms on the basis of their causative genes, including PNPLA1, LIPN, CASP14, CYP4F22, ABCA12, NIPAL4, ALOXE3, SULT2B1, SDR9C7, CERS3, ALOX12B, ST14, and TGM1 (Fachal et al., 2014), of which the pathogenic gene responsible for ARCI type 7 has not yet been defined. Although the clinical features of ARCI patients with PNPLA1 mutations vary, the degree of severity is milder than in the subtypes of ARCI caused by mutated TGM1 and ABCA12 (Zimmer et al., 2017).

PNPLA1, associated with the ARCI type 10 phenotype, was first described as a causative gene for ARCI in humans and dogs by Grall et al. (2012). PNPLA1 is located on chromosome 6p21.31 and spans 71,433 bp of genomic DNA. The protein encoded by PNPLA1 is crucial for generating omega-O-acylceramides (ω-O-AcylCers) in the maintenance of cutaneous integrity and barrier function, and belongs to the mammalian PNPLA family, which contains a highly conserved core patatin domain named after patatin, a protein ubiquitous in potato tubers (Grond et al., 2017; Kienesberger, Oberer, Lass, & Zechner, 2009; Zimmer et al., 2017).
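As noted above, here is a toy sketch (not from the original study, and not the real PNPLA1 coding sequence) of how a single-base duplication shifts the reading frame and creates a premature stop codon, the mechanism invoked for the dupC mutations; it assumes Biopython is available.

```python
# Toy illustration with an invented 8-codon CDS: duplicating one base shifts the
# reading frame so that a downstream codon becomes a premature stop ('*'),
# analogous in miniature to notations such as p.Arg274Profs*15.
from Bio.Seq import Seq

wild_type = "ATGGCTCGTAAAGGCTACCTGGAA"        # ATG GCT CGT AAA GGC TAC CTG GAA
mutant = wild_type[:6] + "C" + wild_type[6:]  # duplicate the C at coding position 7 ("dupC")

def translate(cds):
    trimmed = cds[: len(cds) // 3 * 3]        # keep complete codons only
    return str(Seq(trimmed).translate())

print(translate(wild_type))  # 'MARKGYLE' -- full-length toy peptide
print(translate(mutant))     # 'MAP*RLPG' -- Arg codon becomes Pro, then a premature stop ('*')
```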
The PNPLA family (PNPLA1-9) is one branch of the patatin superfamily and serves key roles in diverse aspects of lipid metabolism and signaling, involving triglyceride lipase, hydrolase, and transacylase activities (Dokmeci-Emre et al., 2017; Pichery et al., 2017). PNPLA1 mutations have been identified in approximately 3% of patients clinically diagnosed with ARCI (Zimmer et al., 2017). To date, definite genotype-phenotype associations in ARCI have not been clearly elucidated, owing to clinical heterogeneity.

Musharraf Jelani et al. described a Pakistani family affected with ARCI, the first ichthyosis case caused by defective PNPLA1 reported in Asia. A homozygous missense mutation c.387C>A (p.Asp129Glu) lying in the highly conserved patatin domain was identified in the patients, who presented with a collodion membrane at birth and fine whitish scales covering most of the body surface; this novel variant was predicted to be damaging by in silico analyses (Lee et al., 2016). In 2017, a Turkish ARCI family carrying a novel homozygous deletion c.733_735delTAC (p.Tyr245del) in exon 5 of PNPLA1 was reported, with clinical characteristics of erythema, small whitish and light brown scales accompanied by pruritus, PPK, toenail dystrophy, and distal onycholysis. That variant is located in the extended patatin domain (amino acids 1-288), between the core patatin domain and the proline-rich region of PNPLA1, and was evaluated as deleterious by several prediction programs (Dokmeci-Emre et al., 2017). Consistent with these mutant loci, the four novel PNPLA1 mutations identified in this study are all located in the extended patatin domain but outside the core patatin domain, and are considered likely pathogenic.

There is at present no radical therapy for ARCI, and affected individuals can only undergo limited symptomatic treatment to relieve symptoms. One potentially effective therapeutic strategy, comprising 10%-20% glycolic acid cream and a combination cream of 2% lovastatin with 2% cholesterol, has been suggested to yield a satisfactory curative effect, with improvement of the cutaneous condition of patients with ARCI (Khalil et al., 2018).

In conclusion, we detected four novel probably disease-causing mutations in two unrelated, nonconsanguineous ARCI families, which expands the mutational spectrum of ARCI type 10, contributes to genotype-phenotype correlations, and further facilitates the development of genetic counseling for affected families. In addition, this study may lay a solid foundation for further investigations of ichthyosis pathogenesis and genetic therapy.

ACKNOWLEDGMENTS
We are grateful to the patients, their family members, and all contributors for their participation.
3,322.2
2019-12-13T00:00:00.000
[ "Medicine", "Biology" ]